May 27 03:22:23.590924 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:22:23.590976 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:22:23.590996 kernel: BIOS-provided physical RAM map:
May 27 03:22:23.591012 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
May 27 03:22:23.591027 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
May 27 03:22:23.591042 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
May 27 03:22:23.591065 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
May 27 03:22:23.591082 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
May 27 03:22:23.591097 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd326fff] usable
May 27 03:22:23.591113 kernel: BIOS-e820: [mem 0x00000000bd327000-0x00000000bd32efff] ACPI data
May 27 03:22:23.591129 kernel: BIOS-e820: [mem 0x00000000bd32f000-0x00000000bf8ecfff] usable
May 27 03:22:23.591144 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
May 27 03:22:23.591160 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
May 27 03:22:23.591176 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
May 27 03:22:23.591200 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
May 27 03:22:23.591217 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
May 27 03:22:23.591235 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
May 27 03:22:23.591252 kernel: NX (Execute Disable) protection: active
May 27 03:22:23.591269 kernel: APIC: Static calls initialized
May 27 03:22:23.591286 kernel: efi: EFI v2.7 by EDK II
May 27 03:22:23.591304 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd327018
May 27 03:22:23.591322 kernel: random: crng init done
May 27 03:22:23.591343 kernel: secureboot: Secure boot disabled
May 27 03:22:23.591377 kernel: SMBIOS 2.4 present.
May 27 03:22:23.591394 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
May 27 03:22:23.591411 kernel: DMI: Memory slots populated: 1/1
May 27 03:22:23.591428 kernel: Hypervisor detected: KVM
May 27 03:22:23.591445 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:22:23.591462 kernel: kvm-clock: using sched offset of 14493395670 cycles
May 27 03:22:23.591480 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:22:23.591497 kernel: tsc: Detected 2299.998 MHz processor
May 27 03:22:23.591515 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:22:23.591538 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:22:23.591556 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
May 27 03:22:23.591574 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
May 27 03:22:23.591592 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:22:23.591609 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
May 27 03:22:23.591625 kernel: Using GB pages for direct mapping
May 27 03:22:23.591644 kernel: ACPI: Early table checksum verification disabled
May 27 03:22:23.591662 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
May 27 03:22:23.591692 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
May 27 03:22:23.591711 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
May 27 03:22:23.591729 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
May 27 03:22:23.591747 kernel: ACPI: FACS 0x00000000BFBF2000 000040
May 27 03:22:23.591765 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
May 27 03:22:23.591784 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
May 27 03:22:23.591807 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
May 27 03:22:23.591826 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
May 27 03:22:23.591845 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
May 27 03:22:23.591863 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
May 27 03:22:23.591881 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
May 27 03:22:23.591909 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
May 27 03:22:23.591927 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
May 27 03:22:23.591945 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
May 27 03:22:23.591964 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
May 27 03:22:23.591988 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
May 27 03:22:23.592007 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
May 27 03:22:23.592025 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
May 27 03:22:23.592043 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
May 27 03:22:23.592061 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 27 03:22:23.592079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
May 27 03:22:23.592098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
May 27 03:22:23.592116 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
May 27 03:22:23.592134 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
May 27 03:22:23.592158 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
May 27 03:22:23.592177 kernel: Zone ranges:
May 27 03:22:23.592196 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:22:23.592214 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 27 03:22:23.592233 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
May 27 03:22:23.592251 kernel: Device empty
May 27 03:22:23.592269 kernel: Movable zone start for each node
May 27 03:22:23.592288 kernel: Early memory node ranges
May 27 03:22:23.592308 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
May 27 03:22:23.592331 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
May 27 03:22:23.592349 kernel: node 0: [mem 0x0000000000100000-0x00000000bd326fff]
May 27 03:22:23.592394 kernel: node 0: [mem 0x00000000bd32f000-0x00000000bf8ecfff]
May 27 03:22:23.592412 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
May 27 03:22:23.592431 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
May 27 03:22:23.592449 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
May 27 03:22:23.592467 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:22:23.592485 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
May 27 03:22:23.592504 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
May 27 03:22:23.592523 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
May 27 03:22:23.592548 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 27 03:22:23.592566 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
May 27 03:22:23.592584 kernel: ACPI: PM-Timer IO Port: 0xb008
May 27 03:22:23.592602 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:22:23.592619 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:22:23.592637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:22:23.592655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:22:23.592673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:22:23.592697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:22:23.592716 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:22:23.592735 kernel: CPU topo: Max. logical packages: 1
May 27 03:22:23.592755 kernel: CPU topo: Max. logical dies: 1
May 27 03:22:23.592774 kernel: CPU topo: Max. dies per package: 1
May 27 03:22:23.592794 kernel: CPU topo: Max. threads per core: 2
May 27 03:22:23.592813 kernel: CPU topo: Num. cores per package: 1
May 27 03:22:23.592832 kernel: CPU topo: Num. threads per package: 2
May 27 03:22:23.592851 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 03:22:23.592871 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 27 03:22:23.592907 kernel: Booting paravirtualized kernel on KVM
May 27 03:22:23.592927 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:22:23.592947 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 03:22:23.592968 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 03:22:23.592987 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 03:22:23.593006 kernel: pcpu-alloc: [0] 0 1
May 27 03:22:23.593026 kernel: kvm-guest: PV spinlocks enabled
May 27 03:22:23.593044 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:22:23.593065 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:22:23.593089 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:22:23.593107 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 27 03:22:23.593125 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:22:23.593143 kernel: Fallback order for Node 0: 0
May 27 03:22:23.593162 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138
May 27 03:22:23.593179 kernel: Policy zone: Normal
May 27 03:22:23.593197 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:22:23.593215 kernel: software IO TLB: area num 2.
May 27 03:22:23.593252 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 03:22:23.593270 kernel: Kernel/User page tables isolation: enabled
May 27 03:22:23.593288 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:22:23.593311 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:22:23.593330 kernel: Dynamic Preempt: voluntary
May 27 03:22:23.593351 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:22:23.593389 kernel: rcu: RCU event tracing is enabled.
May 27 03:22:23.593408 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 03:22:23.593430 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:22:23.593448 kernel: Rude variant of Tasks RCU enabled.
May 27 03:22:23.593467 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:22:23.593485 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:22:23.593502 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 03:22:23.593518 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:22:23.593535 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:22:23.593554 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:22:23.593571 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 03:22:23.593596 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:22:23.593615 kernel: Console: colour dummy device 80x25
May 27 03:22:23.593635 kernel: printk: legacy console [ttyS0] enabled
May 27 03:22:23.593654 kernel: ACPI: Core revision 20240827
May 27 03:22:23.593674 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:22:23.593694 kernel: x2apic enabled
May 27 03:22:23.593714 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:22:23.593733 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
May 27 03:22:23.593752 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
May 27 03:22:23.593777 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
May 27 03:22:23.593797 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
May 27 03:22:23.593816 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
May 27 03:22:23.593836 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:22:23.593855 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
May 27 03:22:23.593872 kernel: Spectre V2 : Mitigation: IBRS
May 27 03:22:23.593901 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:22:23.593920 kernel: RETBleed: Mitigation: IBRS
May 27 03:22:23.593938 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:22:23.593961 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
May 27 03:22:23.593980 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:22:23.593998 kernel: MDS: Mitigation: Clear CPU buffers
May 27 03:22:23.594017 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 27 03:22:23.594035 kernel: ITS: Mitigation: Aligned branch/return thunks
May 27 03:22:23.594054 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:22:23.594073 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:22:23.594092 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:22:23.594115 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:22:23.594133 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 27 03:22:23.594153 kernel: Freeing SMP alternatives memory: 32K
May 27 03:22:23.594172 kernel: pid_max: default: 32768 minimum: 301
May 27 03:22:23.594190 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:22:23.594208 kernel: landlock: Up and running.
May 27 03:22:23.594225 kernel: SELinux: Initializing.
May 27 03:22:23.594243 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 03:22:23.594261 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 03:22:23.594285 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
May 27 03:22:23.594304 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
May 27 03:22:23.594322 kernel: signal: max sigframe size: 1776
May 27 03:22:23.594341 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:22:23.594388 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:22:23.594406 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:22:23.594424 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 27 03:22:23.594442 kernel: smp: Bringing up secondary CPUs ...
May 27 03:22:23.594462 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:22:23.594488 kernel: .... node #0, CPUs: #1
May 27 03:22:23.594508 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 27 03:22:23.594528 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 27 03:22:23.594548 kernel: smp: Brought up 1 node, 2 CPUs
May 27 03:22:23.594566 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
May 27 03:22:23.594586 kernel: Memory: 7564012K/7860552K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 290708K reserved, 0K cma-reserved)
May 27 03:22:23.594606 kernel: devtmpfs: initialized
May 27 03:22:23.594625 kernel: x86/mm: Memory block size: 128MB
May 27 03:22:23.594643 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
May 27 03:22:23.594667 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:22:23.594686 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 03:22:23.594704 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:22:23.594723 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:22:23.594752 kernel: audit: initializing netlink subsys (disabled)
May 27 03:22:23.594772 kernel: audit: type=2000 audit(1748316139.188:1): state=initialized audit_enabled=0 res=1
May 27 03:22:23.594791 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:22:23.594810 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:22:23.594834 kernel: cpuidle: using governor menu
May 27 03:22:23.594853 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:22:23.594873 kernel: dca service started, version 1.12.1
May 27 03:22:23.594902 kernel: PCI: Using configuration type 1 for base access
May 27 03:22:23.594922 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:22:23.594942 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:22:23.594961 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:22:23.594981 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:22:23.595001 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:22:23.595026 kernel: ACPI: Added _OSI(Module Device)
May 27 03:22:23.595046 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:22:23.595065 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:22:23.595085 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:22:23.595105 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 27 03:22:23.595124 kernel: ACPI: Interpreter enabled
May 27 03:22:23.595144 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 03:22:23.595164 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:22:23.595184 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:22:23.595205 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 27 03:22:23.595229 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
May 27 03:22:23.595249 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:22:23.595586 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:22:23.595828 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 27 03:22:23.596071 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 27 03:22:23.596096 kernel: PCI host bridge to bus 0000:00
May 27 03:22:23.596325 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:22:23.596561 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:22:23.596770 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:22:23.596983 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
May 27 03:22:23.597181 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:22:23.597441 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 27 03:22:23.597685 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
May 27 03:22:23.597956 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 27 03:22:23.598180 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 27 03:22:23.598446 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
May 27 03:22:23.598671 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 27 03:22:23.598898 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
May 27 03:22:23.599132 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:22:23.599375 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
May 27 03:22:23.599600 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
May 27 03:22:23.599829 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 03:22:23.600059 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
May 27 03:22:23.600279 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
May 27 03:22:23.600303 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:22:23.600324 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:22:23.600350 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:22:23.600384 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:22:23.600403 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 27 03:22:23.600423 kernel: iommu: Default domain type: Translated
May 27 03:22:23.600443 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:22:23.600463 kernel: efivars: Registered efivars operations
May 27 03:22:23.600483 kernel: PCI: Using ACPI for IRQ routing
May 27 03:22:23.600504 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:22:23.600523 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
May 27 03:22:23.600548 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
May 27 03:22:23.600567 kernel: e820: reserve RAM buffer [mem 0xbd327000-0xbfffffff]
May 27 03:22:23.600586 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
May 27 03:22:23.600606 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
May 27 03:22:23.600625 kernel: vgaarb: loaded
May 27 03:22:23.600645 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:22:23.600664 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:22:23.600684 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:22:23.600704 kernel: pnp: PnP ACPI init
May 27 03:22:23.600729 kernel: pnp: PnP ACPI: found 7 devices
May 27 03:22:23.600749 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:22:23.600769 kernel: NET: Registered PF_INET protocol family
May 27 03:22:23.600789 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 03:22:23.600809 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 27 03:22:23.600829 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:22:23.600849 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:22:23.600869 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 27 03:22:23.600895 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 27 03:22:23.600921 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 03:22:23.600941 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 03:22:23.600961 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:22:23.600980 kernel: NET: Registered PF_XDP protocol family
May 27 03:22:23.601185 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:22:23.601399 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:22:23.601599 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:22:23.601798 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
May 27 03:22:23.602036 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 27 03:22:23.602060 kernel: PCI: CLS 0 bytes, default 64
May 27 03:22:23.602081 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 27 03:22:23.602100 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
May 27 03:22:23.602120 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 27 03:22:23.602141 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
May 27 03:22:23.602161 kernel: clocksource: Switched to clocksource tsc
May 27 03:22:23.602181 kernel: Initialise system trusted keyrings
May 27 03:22:23.602206 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 27 03:22:23.602226 kernel: Key type asymmetric registered
May 27 03:22:23.602246 kernel: Asymmetric key parser 'x509' registered
May 27 03:22:23.602266 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:22:23.602287 kernel: io scheduler mq-deadline registered
May 27 03:22:23.602343 kernel: io scheduler kyber registered
May 27 03:22:23.602390 kernel: io scheduler bfq registered
May 27 03:22:23.602410 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:22:23.602432 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 27 03:22:23.602673 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
May 27 03:22:23.602698 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
May 27 03:22:23.602924 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
May 27 03:22:23.602949 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 27 03:22:23.603169 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
May 27 03:22:23.603192 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:22:23.603213 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:22:23.603233 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 27 03:22:23.603253 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
May 27 03:22:23.603279 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
May 27 03:22:23.603530 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
May 27 03:22:23.603556 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:22:23.603585 kernel: i8042: Warning: Keylock active
May 27 03:22:23.603605 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:22:23.603625 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:22:23.603849 kernel: rtc_cmos 00:00: RTC can wake from S4
May 27 03:22:23.604072 kernel: rtc_cmos 00:00: registered as rtc0
May 27 03:22:23.604280 kernel: rtc_cmos 00:00: setting system clock to 2025-05-27T03:22:22 UTC (1748316142)
May 27 03:22:23.604502 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 27 03:22:23.604525 kernel: intel_pstate: CPU model not supported
May 27 03:22:23.604545 kernel: pstore: Using crash dump compression: deflate
May 27 03:22:23.604566 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 03:22:23.604586 kernel: NET: Registered PF_INET6 protocol family
May 27 03:22:23.604606 kernel: Segment Routing with IPv6
May 27 03:22:23.604625 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:22:23.604651 kernel: NET: Registered PF_PACKET protocol family
May 27 03:22:23.604671 kernel: Key type dns_resolver registered
May 27 03:22:23.604690 kernel: IPI shorthand broadcast: enabled
May 27 03:22:23.604710 kernel: sched_clock: Marking stable (3492004169, 141771808)->(3654576565, -20800588)
May 27 03:22:23.604731 kernel: registered taskstats version 1
May 27 03:22:23.604751 kernel: Loading compiled-in X.509 certificates
May 27 03:22:23.604770 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:22:23.604790 kernel: Demotion targets for Node 0: null
May 27 03:22:23.604809 kernel: Key type .fscrypt registered
May 27 03:22:23.604833 kernel: Key type fscrypt-provisioning registered
May 27 03:22:23.604853 kernel: ima: Allocated hash algorithm: sha1
May 27 03:22:23.604873 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
May 27 03:22:23.604900 kernel: ima: No architecture policies found
May 27 03:22:23.604920 kernel: clk: Disabling unused clocks
May 27 03:22:23.604940 kernel: Warning: unable to open an initial console.
May 27 03:22:23.604961 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:22:23.604981 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:22:23.605005 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:22:23.605024 kernel: Run /init as init process
May 27 03:22:23.605054 kernel: with arguments:
May 27 03:22:23.605080 kernel: /init
May 27 03:22:23.605097 kernel: with environment:
May 27 03:22:23.605115 kernel: HOME=/
May 27 03:22:23.605134 kernel: TERM=linux
May 27 03:22:23.605153 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:22:23.605173 systemd[1]: Successfully made /usr/ read-only.
May 27 03:22:23.605202 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:22:23.605223 systemd[1]: Detected virtualization google.
May 27 03:22:23.605242 systemd[1]: Detected architecture x86-64.
May 27 03:22:23.605262 systemd[1]: Running in initrd.
May 27 03:22:23.605281 systemd[1]: No hostname configured, using default hostname.
May 27 03:22:23.605302 systemd[1]: Hostname set to .
May 27 03:22:23.605322 systemd[1]: Initializing machine ID from random generator.
May 27 03:22:23.605349 systemd[1]: Queued start job for default target initrd.target.
May 27 03:22:23.605419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:22:23.605445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:22:23.605467 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:22:23.605489 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:22:23.605518 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:22:23.605545 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:22:23.605568 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:22:23.605590 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:22:23.605612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:22:23.605633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:22:23.605655 systemd[1]: Reached target paths.target - Path Units.
May 27 03:22:23.605678 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:22:23.605704 systemd[1]: Reached target swap.target - Swaps.
May 27 03:22:23.605727 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:22:23.605747 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:22:23.605769 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:22:23.605789 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 03:22:23.605808 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 03:22:23.605830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:22:23.605851 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:22:23.605872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:22:23.605914 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:22:23.605936 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 03:22:23.605958 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:22:23.605981 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 03:22:23.606003 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 03:22:23.606026 systemd[1]: Starting systemd-fsck-usr.service...
May 27 03:22:23.606047 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:22:23.606069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:22:23.606138 systemd-journald[207]: Collecting audit messages is disabled.
May 27 03:22:23.606183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:22:23.606206 systemd-journald[207]: Journal started
May 27 03:22:23.606250 systemd-journald[207]: Runtime Journal (/run/log/journal/735036f1162e497fb81d8b925aabc729) is 8M, max 148.9M, 140.9M free.
May 27 03:22:23.622785 systemd-modules-load[209]: Inserted module 'overlay'
May 27 03:22:23.638502 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:22:23.642179 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 03:22:23.642554 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:22:23.642872 systemd[1]: Finished systemd-fsck-usr.service.
May 27 03:22:23.654570 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:22:23.659808 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:22:23.690404 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 03:22:23.696470 systemd-modules-load[209]: Inserted module 'br_netfilter'
May 27 03:22:23.757650 kernel: Bridge firewalling registered
May 27 03:22:23.697787 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:22:23.767480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:22:23.772053 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 03:22:23.784949 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:22:23.804855 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:22:23.830061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 03:22:23.850487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:22:23.872557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:22:23.916640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:22:23.921441 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:22:23.935292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:22:23.945691 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:22:23.967842 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 03:22:23.980536 systemd-resolved[234]: Positive Trust Anchors:
May 27 03:22:23.980546 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:22:23.980588 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:22:23.984381 systemd-resolved[234]: Defaulting to hostname 'linux'.
May 27 03:22:23.985729 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:22:23.986015 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:22:24.100280 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:22:24.207400 kernel: SCSI subsystem initialized
May 27 03:22:24.224411 kernel: Loading iSCSI transport class v2.0-870.
May 27 03:22:24.240401 kernel: iscsi: registered transport (tcp)
May 27 03:22:24.273026 kernel: iscsi: registered transport (qla4xxx)
May 27 03:22:24.273110 kernel: QLogic iSCSI HBA Driver
May 27 03:22:24.296214 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:22:24.333534 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:22:24.336653 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:22:24.412070 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 03:22:24.422383 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 03:22:24.512407 kernel: raid6: avx2x4 gen() 18093 MB/s
May 27 03:22:24.533407 kernel: raid6: avx2x2 gen() 18039 MB/s
May 27 03:22:24.559391 kernel: raid6: avx2x1 gen() 14213 MB/s
May 27 03:22:24.559458 kernel: raid6: using algorithm avx2x4 gen() 18093 MB/s
May 27 03:22:24.586381 kernel: raid6: .... xor() 7950 MB/s, rmw enabled
May 27 03:22:24.586440 kernel: raid6: using avx2x2 recovery algorithm
May 27 03:22:24.615396 kernel: xor: automatically using best checksumming function avx
May 27 03:22:24.803403 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 03:22:24.811807 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:22:24.822619 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:22:24.875952 systemd-udevd[455]: Using default interface naming scheme 'v255'.
May 27 03:22:24.884718 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:22:24.908684 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 03:22:24.951942 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
May 27 03:22:24.986965 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:22:24.988766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:22:25.106466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:22:25.144478 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 03:22:25.214922 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
May 27 03:22:25.226395 kernel: scsi host0: Virtio SCSI HBA
May 27 03:22:25.239388 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
May 27 03:22:25.252394 kernel: cryptd: max_cpu_qlen set to 1000
May 27 03:22:25.290383 kernel: AES CTR mode by8 optimization enabled
May 27 03:22:25.290452 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 27 03:22:25.311699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:22:25.312288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:22:25.352692 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:22:25.374416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:22:25.403112 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
May 27 03:22:25.403461 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
May 27 03:22:25.403697 kernel: sd 0:0:1:0: [sda] Write Protect is off
May 27 03:22:25.408854 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
May 27 03:22:25.409147 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 27 03:22:25.451896 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 03:22:25.451970 kernel: GPT:17805311 != 25165823
May 27 03:22:25.451994 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 03:22:25.452017 kernel: GPT:17805311 != 25165823
May 27 03:22:25.455469 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 03:22:25.455514 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:22:25.460837 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
May 27 03:22:25.471800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:22:25.562629 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
May 27 03:22:25.563132 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 03:22:25.604287 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
May 27 03:22:25.635640 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
May 27 03:22:25.655387 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
May 27 03:22:25.675498 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
May 27 03:22:25.685600 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:22:25.708531 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:22:25.727531 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:22:25.745621 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 03:22:25.754637 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 03:22:25.790063 disk-uuid[610]: Primary Header is updated.
May 27 03:22:25.790063 disk-uuid[610]: Secondary Entries is updated.
May 27 03:22:25.790063 disk-uuid[610]: Secondary Header is updated.
May 27 03:22:25.814713 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:22:25.829958 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:22:25.855656 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:22:26.861486 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:22:26.862161 disk-uuid[611]: The operation has completed successfully.
May 27 03:22:26.944803 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 03:22:26.944967 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 03:22:26.999229 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 03:22:27.031739 sh[632]: Success
May 27 03:22:27.069262 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 03:22:27.069347 kernel: device-mapper: uevent: version 1.0.3
May 27 03:22:27.069394 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 03:22:27.095450 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
May 27 03:22:27.181008 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 03:22:27.184490 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 03:22:27.225412 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 03:22:27.254414 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 03:22:27.254501 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (644)
May 27 03:22:27.284547 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522
May 27 03:22:27.284634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 03:22:27.284666 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 03:22:27.401667 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 03:22:27.402475 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:22:27.424694 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 03:22:27.425765 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 03:22:27.434704 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 03:22:27.503410 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (675)
May 27 03:22:27.527397 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:22:27.527484 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:22:27.527509 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:22:27.546379 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:22:27.548211 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 03:22:27.568074 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 03:22:27.599966 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:22:27.602734 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:22:27.697346 systemd-networkd[813]: lo: Link UP
May 27 03:22:27.697775 systemd-networkd[813]: lo: Gained carrier
May 27 03:22:27.700599 systemd-networkd[813]: Enumeration completed
May 27 03:22:27.700734 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:22:27.701525 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:22:27.701532 systemd-networkd[813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:22:27.704162 systemd-networkd[813]: eth0: Link UP
May 27 03:22:27.704169 systemd-networkd[813]: eth0: Gained carrier
May 27 03:22:27.704184 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:22:27.718467 systemd-networkd[813]: eth0: DHCPv4 address 10.128.0.39/32, gateway 10.128.0.1 acquired from 169.254.169.254
May 27 03:22:27.808588 ignition[785]: Ignition 2.21.0
May 27 03:22:27.777743 systemd[1]: Reached target network.target - Network.
May 27 03:22:27.808600 ignition[785]: Stage: fetch-offline
May 27 03:22:27.811452 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:22:27.808638 ignition[785]: no configs at "/usr/lib/ignition/base.d"
May 27 03:22:27.832884 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 03:22:27.808651 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 27 03:22:27.808757 ignition[785]: parsed url from cmdline: ""
May 27 03:22:27.891726 unknown[822]: fetched base config from "system"
May 27 03:22:27.808762 ignition[785]: no config URL provided
May 27 03:22:27.891737 unknown[822]: fetched base config from "system"
May 27 03:22:27.808770 ignition[785]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:22:27.891747 unknown[822]: fetched user config from "gcp"
May 27 03:22:27.808781 ignition[785]: no config at "/usr/lib/ignition/user.ign"
May 27 03:22:27.894490 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 03:22:27.808789 ignition[785]: failed to fetch config: resource requires networking
May 27 03:22:27.906372 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 03:22:27.809094 ignition[785]: Ignition finished successfully
May 27 03:22:27.958155 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 03:22:27.878308 ignition[822]: Ignition 2.21.0
May 27 03:22:27.963842 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 03:22:27.878316 ignition[822]: Stage: fetch
May 27 03:22:28.023440 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 03:22:27.878536 ignition[822]: no configs at "/usr/lib/ignition/base.d"
May 27 03:22:28.036219 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 03:22:27.878548 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 27 03:22:28.050500 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 03:22:27.878664 ignition[822]: parsed url from cmdline: ""
May 27 03:22:28.067650 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:22:27.878669 ignition[822]: no config URL provided
May 27 03:22:28.074709 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:22:27.878676 ignition[822]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:22:28.091675 systemd[1]: Reached target basic.target - Basic System.
May 27 03:22:27.878685 ignition[822]: no config at "/usr/lib/ignition/user.ign"
May 27 03:22:28.110071 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 03:22:27.878722 ignition[822]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
May 27 03:22:27.882614 ignition[822]: GET result: OK
May 27 03:22:27.882745 ignition[822]: parsing config with SHA512: c3f5c01736d6b8b938aaa673ec41542c2e186adc2f47c2bd8aec5924d2c4c89193f35f8e20f1c4bdec3b3fe1648dd09e54b629ef928a4f8ab3164080ee7d706d
May 27 03:22:27.892123 ignition[822]: fetch: fetch complete
May 27 03:22:27.892130 ignition[822]: fetch: fetch passed
May 27 03:22:27.892178 ignition[822]: Ignition finished successfully
May 27 03:22:27.952577 ignition[828]: Ignition 2.21.0
May 27 03:22:27.952586 ignition[828]: Stage: kargs
May 27 03:22:27.952797 ignition[828]: no configs at "/usr/lib/ignition/base.d"
May 27 03:22:27.952809 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 27 03:22:27.956087 ignition[828]: kargs: kargs passed
May 27 03:22:27.956182 ignition[828]: Ignition finished successfully
May 27 03:22:28.021225 ignition[835]: Ignition 2.21.0
May 27 03:22:28.021233 ignition[835]: Stage: disks
May 27 03:22:28.021412 ignition[835]: no configs at "/usr/lib/ignition/base.d"
May 27 03:22:28.021428 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 27 03:22:28.022245 ignition[835]: disks: disks passed
May 27 03:22:28.022303 ignition[835]: Ignition finished successfully
May 27 03:22:28.180940 systemd-fsck[843]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
May 27 03:22:28.291325 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 03:22:28.293333 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 03:22:28.513888 kernel: EXT4-fs (sda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none.
May 27 03:22:28.515033 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 03:22:28.515858 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 03:22:28.531120 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:22:28.555059 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 03:22:28.569094 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 03:22:28.608207 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (851)
May 27 03:22:28.608249 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:22:28.608273 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:22:28.569180 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 03:22:28.621917 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:22:28.569218 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:22:28.650774 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:22:28.664838 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 03:22:28.680940 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 03:22:28.813498 initrd-setup-root[875]: cut: /sysroot/etc/passwd: No such file or directory
May 27 03:22:28.823528 initrd-setup-root[882]: cut: /sysroot/etc/group: No such file or directory
May 27 03:22:28.832488 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory
May 27 03:22:28.841511 initrd-setup-root[896]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 03:22:28.987273 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 03:22:28.998348 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 03:22:29.013594 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 03:22:29.047588 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:22:29.039239 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 03:22:29.096019 ignition[964]: INFO : Ignition 2.21.0
May 27 03:22:29.103598 ignition[964]: INFO : Stage: mount
May 27 03:22:29.103598 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:22:29.103598 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 27 03:22:29.103598 ignition[964]: INFO : mount: mount passed
May 27 03:22:29.103598 ignition[964]: INFO : Ignition finished successfully
May 27 03:22:29.097475 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 03:22:29.104016 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 03:22:29.119049 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 03:22:29.124548 systemd-networkd[813]: eth0: Gained IPv6LL
May 27 03:22:29.178341 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:22:29.228460 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (976)
May 27 03:22:29.246077 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:22:29.246164 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:22:29.246189 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:22:29.259557 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:22:29.300316 ignition[993]: INFO : Ignition 2.21.0
May 27 03:22:29.300316 ignition[993]: INFO : Stage: files
May 27 03:22:29.312555 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:22:29.312555 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 27 03:22:29.312555 ignition[993]: DEBUG : files: compiled without relabeling support, skipping
May 27 03:22:29.312555 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 03:22:29.312555 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 03:22:29.312555 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 03:22:29.312555 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 03:22:29.312555 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 03:22:29.310095 unknown[993]: wrote ssh authorized keys file for user: core
May 27 03:22:29.406513 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 03:22:29.406513 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 27 03:22:29.438508 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 03:22:29.587952 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:22:29.603524 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 27 03:22:30.169800 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 03:22:30.508052 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:22:30.508052 ignition[993]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 03:22:30.544543 ignition[993]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:22:30.544543 ignition[993]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:22:30.544543 ignition[993]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 03:22:30.544543 ignition[993]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 27 03:22:30.544543 ignition[993]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 27 03:22:30.544543 ignition[993]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:22:30.544543 ignition[993]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:22:30.544543 ignition[993]: INFO : files: files passed
May 27 03:22:30.544543 ignition[993]: INFO : Ignition finished successfully
May 27 03:22:30.516067 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 03:22:30.527239 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 03:22:30.583632 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 03:22:30.592984 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 03:22:30.741541 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:22:30.741541 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:22:30.593105 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 03:22:30.776667 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:22:30.674951 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:22:30.692879 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:22:30.711625 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:22:30.775787 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:22:30.775910 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:22:30.786726 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:22:30.808645 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:22:30.827659 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:22:30.828826 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:22:30.876839 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:22:30.897186 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:22:30.923838 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:22:30.935725 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:22:30.955940 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:22:30.974717 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:22:30.974925 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:22:31.003825 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:22:31.021732 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:22:31.037749 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:22:31.046872 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:22:31.064924 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:22:31.081931 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:22:31.115019 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:22:31.123934 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:22:31.150824 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:22:31.159841 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:22:31.175922 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:22:31.192834 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:22:31.193053 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:22:31.229884 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:22:31.237879 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:22:31.254825 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:22:31.255082 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:22:31.273848 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:22:31.274047 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:22:31.310834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:22:31.311088 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:22:31.322956 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:22:31.323246 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:22:31.341310 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:22:31.357132 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:22:31.440584 ignition[1047]: INFO : Ignition 2.21.0
May 27 03:22:31.440584 ignition[1047]: INFO : Stage: umount
May 27 03:22:31.440584 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:22:31.440584 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 27 03:22:31.440584 ignition[1047]: INFO : umount: umount passed
May 27 03:22:31.440584 ignition[1047]: INFO : Ignition finished successfully
May 27 03:22:31.404558 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:22:31.404879 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:22:31.415881 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:22:31.416066 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:22:31.444944 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:22:31.446589 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:22:31.446863 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:22:31.462229 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:22:31.462420 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:22:31.483390 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:22:31.483556 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:22:31.502787 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:22:31.502850 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:22:31.507741 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:22:31.507813 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:22:31.523836 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 03:22:31.523910 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 03:22:31.548700 systemd[1]: Stopped target network.target - Network.
May 27 03:22:31.564682 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:22:31.564778 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:22:31.573752 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:22:31.590779 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:22:31.596485 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:22:31.604738 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:22:31.621727 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:22:31.635794 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:22:31.635875 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:22:31.649761 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:22:31.649838 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:22:31.663757 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:22:31.663842 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:22:31.688749 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:22:31.688824 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:22:31.698728 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:22:31.698805 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:22:31.715025 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:22:31.739679 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:22:31.747242 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:22:31.747402 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:22:31.766047 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:22:31.766317 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:22:31.766515 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:22:31.789201 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:22:31.791076 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:22:31.805663 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:22:31.805724 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:22:31.823812 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:22:31.847522 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:22:31.847747 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:22:31.885656 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:22:31.885748 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:22:31.901827 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:22:31.901909 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:22:31.917603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:22:31.917707 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:22:31.934808 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:22:31.954014 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:22:32.378509 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
May 27 03:22:31.954213 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:22:31.954737 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:22:31.954902 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:22:31.961730 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:22:31.961859 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:22:31.985679 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:22:31.985737 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:22:32.003724 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:22:32.003797 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:22:32.038749 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:22:32.039018 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:22:32.065794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:22:32.065884 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:22:32.094828 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:22:32.110516 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:22:32.110632 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:22:32.129891 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:22:32.129967 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:22:32.167770 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 03:22:32.167838 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:22:32.187780 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:22:32.187869 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:22:32.206563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:22:32.206671 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:22:32.227324 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 03:22:32.227428 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 03:22:32.227474 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 03:22:32.227519 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:22:32.228032 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:22:32.228148 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:22:32.244052 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:22:32.244171 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:22:32.253774 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:22:32.270887 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:22:32.324226 systemd[1]: Switching root.
May 27 03:22:32.713537 systemd-journald[207]: Journal stopped
May 27 03:22:35.260793 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:22:35.260842 kernel: SELinux: policy capability open_perms=1
May 27 03:22:35.260864 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:22:35.260882 kernel: SELinux: policy capability always_check_network=0
May 27 03:22:35.260900 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:22:35.260918 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:22:35.260942 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:22:35.260960 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:22:35.260977 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:22:35.260996 kernel: audit: type=1403 audit(1748316153.007:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:22:35.261017 systemd[1]: Successfully loaded SELinux policy in 105.150ms.
May 27 03:22:35.261039 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.777ms.
May 27 03:22:35.261061 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:22:35.261085 systemd[1]: Detected virtualization google.
May 27 03:22:35.261106 systemd[1]: Detected architecture x86-64.
May 27 03:22:35.261127 systemd[1]: Detected first boot.
May 27 03:22:35.261148 systemd[1]: Initializing machine ID from random generator.
May 27 03:22:35.261169 zram_generator::config[1090]: No configuration found.
May 27 03:22:35.261195 kernel: Guest personality initialized and is inactive
May 27 03:22:35.261214 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 03:22:35.261234 kernel: Initialized host personality
May 27 03:22:35.261253 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:22:35.261273 systemd[1]: Populated /etc with preset unit settings.
May 27 03:22:35.261295 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:22:35.261315 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:22:35.261338 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:22:35.263416 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:22:35.263451 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:22:35.263476 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:22:35.263498 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:22:35.263519 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:22:35.263541 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:22:35.263569 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:22:35.263589 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:22:35.263611 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:22:35.263633 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:22:35.263656 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:22:35.263679 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:22:35.263700 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:22:35.263722 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:22:35.263751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:22:35.263777 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:22:35.263800 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:22:35.263824 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:22:35.263846 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:22:35.263868 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:22:35.263890 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:22:35.263913 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:22:35.263940 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:22:35.263964 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:22:35.263986 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:22:35.264008 systemd[1]: Reached target swap.target - Swaps.
May 27 03:22:35.264030 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:22:35.264051 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:22:35.264073 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:22:35.264101 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:22:35.264124 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:22:35.264146 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:22:35.264169 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:22:35.264192 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:22:35.264215 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:22:35.264241 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:22:35.264264 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:22:35.264287 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:22:35.264311 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:22:35.264333 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:22:35.264405 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:22:35.264431 systemd[1]: Reached target machines.target - Containers.
May 27 03:22:35.264454 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:22:35.264483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:22:35.264505 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:22:35.264528 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:22:35.264551 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:22:35.264573 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:22:35.264596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:22:35.264618 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:22:35.264641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:22:35.264664 kernel: fuse: init (API version 7.41)
May 27 03:22:35.264690 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:22:35.264712 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:22:35.264734 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:22:35.264758 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:22:35.264781 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:22:35.264805 kernel: ACPI: bus type drm_connector registered
May 27 03:22:35.264828 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:22:35.264851 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:22:35.264878 kernel: loop: module loaded
May 27 03:22:35.264900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:22:35.264923 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:22:35.264992 systemd-journald[1178]: Collecting audit messages is disabled.
May 27 03:22:35.265043 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:22:35.265068 systemd-journald[1178]: Journal started
May 27 03:22:35.265109 systemd-journald[1178]: Runtime Journal (/run/log/journal/e9a7b653857f41c39f52f91a0a43198c) is 8M, max 148.9M, 140.9M free.
May 27 03:22:34.053992 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:22:34.079205 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 27 03:22:34.079918 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:22:35.294450 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:22:35.324967 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:22:35.325076 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:22:35.331548 systemd[1]: Stopped verity-setup.service.
May 27 03:22:35.362388 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:22:35.373406 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:22:35.383018 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:22:35.391693 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:22:35.400707 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:22:35.409710 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:22:35.418711 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:22:35.427709 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:22:35.437017 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:22:35.448039 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:22:35.458988 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:22:35.459511 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:22:35.469886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:22:35.470180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:22:35.480800 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:22:35.481068 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:22:35.489837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:22:35.490110 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:22:35.500874 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:22:35.501157 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:22:35.510907 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:22:35.511201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:22:35.521042 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:22:35.531961 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:22:35.542917 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:22:35.553931 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:22:35.565077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:22:35.588747 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:22:35.599354 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:22:35.616485 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:22:35.625535 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:22:35.625746 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:22:35.635774 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:22:35.646818 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:22:35.655713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:22:35.666737 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:22:35.681875 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:22:35.695855 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:22:35.699861 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:22:35.708394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:22:35.718574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:22:35.730321 systemd-journald[1178]: Time spent on flushing to /var/log/journal/e9a7b653857f41c39f52f91a0a43198c is 45.588ms for 952 entries.
May 27 03:22:35.730321 systemd-journald[1178]: System Journal (/var/log/journal/e9a7b653857f41c39f52f91a0a43198c) is 8M, max 584.8M, 576.8M free.
May 27 03:22:35.822186 systemd-journald[1178]: Received client request to flush runtime journal.
May 27 03:22:35.738661 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:22:35.746677 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:22:35.756948 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:22:35.772752 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:22:35.783986 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:22:35.800345 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:22:35.816932 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:22:35.836568 kernel: loop0: detected capacity change from 0 to 113872
May 27 03:22:35.836443 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:22:35.860033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:22:35.889004 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:22:35.891693 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:22:35.893455 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
May 27 03:22:35.893488 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
May 27 03:22:35.904425 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:22:35.912401 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:22:35.926636 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:22:35.941475 kernel: loop1: detected capacity change from 0 to 224512
May 27 03:22:36.012807 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:22:36.025977 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:22:36.049754 kernel: loop2: detected capacity change from 0 to 146240
May 27 03:22:36.099107 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
May 27 03:22:36.099707 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
May 27 03:22:36.113007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:22:36.171385 kernel: loop3: detected capacity change from 0 to 52072
May 27 03:22:36.254440 kernel: loop4: detected capacity change from 0 to 113872
May 27 03:22:36.299454 kernel: loop5: detected capacity change from 0 to 224512
May 27 03:22:36.359397 kernel: loop6: detected capacity change from 0 to 146240
May 27 03:22:36.417395 kernel: loop7: detected capacity change from 0 to 52072
May 27 03:22:36.451133 (sd-merge)[1238]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
May 27 03:22:36.453745 (sd-merge)[1238]: Merged extensions into '/usr'.
May 27 03:22:36.464548 systemd[1]: Reload requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:22:36.464942 systemd[1]: Reloading...
May 27 03:22:36.637765 zram_generator::config[1267]: No configuration found.
May 27 03:22:36.836212 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:22:36.857387 ldconfig[1208]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:22:37.040906 systemd[1]: Reloading finished in 575 ms.
May 27 03:22:37.054527 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:22:37.064164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:22:37.095537 systemd[1]: Starting ensure-sysext.service...
May 27 03:22:37.105681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:22:37.139022 systemd[1]: Reload requested from client PID 1304 ('systemctl') (unit ensure-sysext.service)...
May 27 03:22:37.139222 systemd[1]: Reloading...
May 27 03:22:37.151966 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:22:37.152483 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:22:37.153103 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:22:37.153736 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:22:37.155981 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:22:37.157140 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
May 27 03:22:37.157353 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
May 27 03:22:37.184607 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:22:37.184627 systemd-tmpfiles[1305]: Skipping /boot
May 27 03:22:37.235305 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:22:37.236587 zram_generator::config[1328]: No configuration found.
May 27 03:22:37.235334 systemd-tmpfiles[1305]: Skipping /boot May 27 03:22:37.403745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:22:37.516623 systemd[1]: Reloading finished in 376 ms. May 27 03:22:37.531721 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 03:22:37.554053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:22:37.573498 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:22:37.592982 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 03:22:37.606947 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 03:22:37.627423 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:22:37.639429 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:22:37.652353 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 03:22:37.671549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:37.673035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:22:37.680230 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:22:37.693461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:22:37.707961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:22:37.716619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 27 03:22:37.717113 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:22:37.722042 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 03:22:37.722885 augenrules[1402]: No rules May 27 03:22:37.730473 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:37.734508 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:22:37.734881 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:22:37.745645 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 03:22:37.751649 systemd-udevd[1388]: Using default interface naming scheme 'v255'. May 27 03:22:37.757448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:22:37.760455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:22:37.777745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:22:37.778833 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:22:37.791482 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:22:37.791763 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:22:37.811983 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 03:22:37.831240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:22:37.844562 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 03:22:37.859650 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 27 03:22:37.860537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:22:37.867514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:22:37.880424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:22:37.896499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:22:37.905605 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:22:37.905824 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:22:37.913504 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:22:37.927098 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 03:22:37.935480 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:22:37.935679 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:37.946040 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 03:22:37.957677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:22:37.957980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:22:37.969449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:22:37.970063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 27 03:22:37.982619 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:22:37.982922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:22:38.034228 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 03:22:38.046430 systemd[1]: Finished ensure-sysext.service. May 27 03:22:38.068485 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:38.073802 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:22:38.081844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:22:38.088654 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:22:38.098851 systemd-resolved[1384]: Positive Trust Anchors: May 27 03:22:38.101507 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:22:38.101701 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:22:38.109585 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:22:38.122606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:22:38.129154 systemd-resolved[1384]: Defaulting to hostname 'linux'. May 27 03:22:38.135673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 27 03:22:38.146626 systemd[1]: Starting setup-oem.service - Setup OEM... May 27 03:22:38.154650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:22:38.155534 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:22:38.155644 systemd[1]: Reached target time-set.target - System Time Set. May 27 03:22:38.164524 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:22:38.164573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:38.174799 (udev-worker)[1434]: loop6: Failed to create/update device symlink '/dev/disk/by-loop-inode/0:43-32752', ignoring: No such file or directory May 27 03:22:38.179419 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:22:38.189315 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:22:38.190661 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:22:38.192518 augenrules[1460]: /sbin/augenrules: No change May 27 03:22:38.200145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:22:38.200985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:22:38.210920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:22:38.211283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:22:38.222891 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 27 03:22:38.223943 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:22:38.224352 augenrules[1488]: No rules May 27 03:22:38.232987 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:22:38.233330 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:22:38.265999 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. May 27 03:22:38.267930 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 03:22:38.268205 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:22:38.278602 systemd[1]: Reached target tpm2.target - Trusted Platform Module. May 27 03:22:38.287552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:22:38.287616 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:22:38.302383 kernel: mousedev: PS/2 mouse device common for all mice May 27 03:22:38.306933 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 03:22:38.317567 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 03:22:38.327607 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 03:22:38.337742 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 03:22:38.346710 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 03:22:38.357530 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 03:22:38.367519 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
May 27 03:22:38.367580 systemd[1]: Reached target paths.target - Path Units. May 27 03:22:38.376488 systemd[1]: Reached target timers.target - Timer Units. May 27 03:22:38.389090 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 03:22:38.402618 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 03:22:38.418836 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 03:22:38.429773 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 03:22:38.440630 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 03:22:38.444859 systemd-networkd[1450]: lo: Link UP May 27 03:22:38.446400 systemd-networkd[1450]: lo: Gained carrier May 27 03:22:38.450627 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 03:22:38.452594 systemd-networkd[1450]: Enumeration completed May 27 03:22:38.454304 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:22:38.455059 systemd-networkd[1450]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:22:38.455900 systemd-networkd[1450]: eth0: Link UP May 27 03:22:38.458012 systemd-networkd[1450]: eth0: Gained carrier May 27 03:22:38.458043 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:22:38.460803 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:22:38.462264 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 27 03:22:38.469452 systemd-networkd[1450]: eth0: DHCPv4 address 10.128.0.39/32, gateway 10.128.0.1 acquired from 169.254.169.254 May 27 03:22:38.472566 systemd[1]: Finished setup-oem.service - Setup OEM. May 27 03:22:38.481724 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 03:22:38.498387 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 27 03:22:38.518032 kernel: ACPI: button: Power Button [PWRF] May 27 03:22:38.549402 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 May 27 03:22:38.576524 systemd[1]: Reached target network.target - Network. May 27 03:22:38.587649 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... May 27 03:22:38.598425 kernel: ACPI: button: Sleep Button [SLPF] May 27 03:22:38.629925 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 27 03:22:38.637659 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 03:22:38.649308 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 03:22:38.667394 kernel: EDAC MC: Ver: 3.0.0 May 27 03:22:38.717096 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 03:22:38.739450 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. May 27 03:22:38.763733 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 03:22:38.772534 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:22:38.781526 systemd[1]: Reached target basic.target - Basic System. May 27 03:22:38.790646 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 03:22:38.790703 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
May 27 03:22:38.795541 systemd[1]: Starting containerd.service - containerd container runtime... May 27 03:22:38.808735 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 03:22:38.820670 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 03:22:38.830529 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 03:22:38.845559 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 03:22:38.857805 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 03:22:38.866484 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 03:22:38.868978 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 03:22:38.880621 jq[1540]: false May 27 03:22:38.891968 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 03:22:38.903659 systemd[1]: Started ntpd.service - Network Time Service. May 27 03:22:38.909582 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 03:22:38.932650 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 03:22:38.948647 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 03:22:38.977668 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 03:22:38.988014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). May 27 03:22:38.988866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 03:22:38.991556 systemd[1]: Starting update-engine.service - Update Engine... 
May 27 03:22:39.001596 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 03:22:39.018436 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 03:22:39.029496 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 03:22:39.030576 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 03:22:39.031658 oslogin_cache_refresh[1542]: Refreshing passwd entry cache May 27 03:22:39.037869 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Refreshing passwd entry cache May 27 03:22:39.056720 jq[1555]: true May 27 03:22:39.061023 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 03:22:39.062186 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 03:22:39.096346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. May 27 03:22:39.102410 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Failure getting users, quitting May 27 03:22:39.102410 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:22:39.102410 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Refreshing group entry cache May 27 03:22:39.102596 update_engine[1554]: I20250527 03:22:39.101891 1554 main.cc:92] Flatcar Update Engine starting May 27 03:22:39.101626 oslogin_cache_refresh[1542]: Failure getting users, quitting May 27 03:22:39.101676 oslogin_cache_refresh[1542]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
May 27 03:22:39.101733 oslogin_cache_refresh[1542]: Refreshing group entry cache May 27 03:22:39.109575 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Failure getting groups, quitting May 27 03:22:39.109575 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:22:39.109012 systemd[1]: motdgen.service: Deactivated successfully. May 27 03:22:39.107043 oslogin_cache_refresh[1542]: Failure getting groups, quitting May 27 03:22:39.109332 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 03:22:39.107061 oslogin_cache_refresh[1542]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:22:39.119038 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 03:22:39.119413 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 03:22:39.132978 extend-filesystems[1541]: Found loop4 May 27 03:22:39.132978 extend-filesystems[1541]: Found loop5 May 27 03:22:39.132978 extend-filesystems[1541]: Found loop6 May 27 03:22:39.132978 extend-filesystems[1541]: Found loop7 May 27 03:22:39.132978 extend-filesystems[1541]: Found sda May 27 03:22:39.132978 extend-filesystems[1541]: Found sda1 May 27 03:22:39.132978 extend-filesystems[1541]: Found sda2 May 27 03:22:39.132978 extend-filesystems[1541]: Found sda3 May 27 03:22:39.132978 extend-filesystems[1541]: Found usr May 27 03:22:39.132978 extend-filesystems[1541]: Found sda4 May 27 03:22:39.132978 extend-filesystems[1541]: Found sda6 May 27 03:22:39.222390 extend-filesystems[1541]: Found sda7 May 27 03:22:39.222390 extend-filesystems[1541]: Found sda9 May 27 03:22:39.222390 extend-filesystems[1541]: Checking size of /dev/sda9 May 27 03:22:39.222390 extend-filesystems[1541]: Resized partition /dev/sda9 May 27 03:22:39.280559 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks May 27 03:22:39.207329 ntpd[1545]: ntpd 
4.2.8p17@1.4004-o Tue May 27 00:37:40 UTC 2025 (1): Starting May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: ntpd 4.2.8p17@1.4004-o Tue May 27 00:37:40 UTC 2025 (1): Starting May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: ---------------------------------------------------- May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: ntp-4 is maintained by Network Time Foundation, May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: corporation. Support and training for ntp-4 are May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: available at https://www.nwtime.org/support May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: ---------------------------------------------------- May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: proto: precision = 0.088 usec (-23) May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: basedate set to 2025-05-15 May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: gps base set to 2025-05-18 (week 2367) May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: Listen and drop on 0 v6wildcard [::]:123 May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: Listen normally on 2 lo 127.0.0.1:123 May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: Listen normally on 3 eth0 10.128.0.39:123 May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: Listen normally on 4 lo [::1]:123 May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: bind(21) AF_INET6 fe80::4001:aff:fe80:27%2#123 flags 0x11 failed: Cannot assign requested address May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 
ntpd[1545]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:27%2#123 May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: failed to init interface for address fe80::4001:aff:fe80:27%2 May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: Listening on routing socket on fd #21 for interface updates May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 03:22:39.280990 ntpd[1545]: 27 May 03:22:39 ntpd[1545]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 03:22:39.166430 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:22:39.288012 coreos-metadata[1537]: May 27 03:22:39.218 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 May 27 03:22:39.288012 coreos-metadata[1537]: May 27 03:22:39.226 INFO Fetch successful May 27 03:22:39.288012 coreos-metadata[1537]: May 27 03:22:39.226 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 May 27 03:22:39.288012 coreos-metadata[1537]: May 27 03:22:39.227 INFO Fetch successful May 27 03:22:39.288012 coreos-metadata[1537]: May 27 03:22:39.227 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 May 27 03:22:39.288012 coreos-metadata[1537]: May 27 03:22:39.229 INFO Fetch successful May 27 03:22:39.288012 coreos-metadata[1537]: May 27 03:22:39.230 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 May 27 03:22:39.288012 coreos-metadata[1537]: May 27 03:22:39.250 INFO Fetch successful May 27 03:22:39.310066 tar[1561]: linux-amd64/LICENSE May 27 03:22:39.310066 tar[1561]: linux-amd64/helm May 27 03:22:39.310494 extend-filesystems[1587]: resize2fs 1.47.2 (1-Jan-2025) May 27 03:22:39.360153 kernel: EXT4-fs (sda9): resized filesystem to 2538491 May 27 03:22:39.207378 ntpd[1545]: Command line: /usr/sbin/ntpd -g 
-n -u ntp:ntp May 27 03:22:39.216824 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 03:22:39.207394 ntpd[1545]: ---------------------------------------------------- May 27 03:22:39.230528 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 03:22:39.384134 extend-filesystems[1587]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 27 03:22:39.384134 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 2 May 27 03:22:39.384134 extend-filesystems[1587]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. May 27 03:22:39.207408 ntpd[1545]: ntp-4 is maintained by Network Time Foundation, May 27 03:22:39.239819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:39.422935 jq[1564]: true May 27 03:22:39.427636 extend-filesystems[1541]: Resized filesystem in /dev/sda9 May 27 03:22:39.207420 ntpd[1545]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 27 03:22:39.390584 systemd-logind[1551]: Watching system buttons on /dev/input/event2 (Power Button) May 27 03:22:39.207434 ntpd[1545]: corporation. Support and training for ntp-4 are May 27 03:22:39.390615 systemd-logind[1551]: Watching system buttons on /dev/input/event3 (Sleep Button) May 27 03:22:39.207448 ntpd[1545]: available at https://www.nwtime.org/support May 27 03:22:39.390647 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 03:22:39.207462 ntpd[1545]: ---------------------------------------------------- May 27 03:22:39.391774 systemd-logind[1551]: New seat seat0. May 27 03:22:39.217841 ntpd[1545]: proto: precision = 0.088 usec (-23) May 27 03:22:39.393633 systemd[1]: Started systemd-logind.service - User Login Management. 
May 27 03:22:39.219898 ntpd[1545]: basedate set to 2025-05-15 May 27 03:22:39.405939 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:22:39.219970 ntpd[1545]: gps base set to 2025-05-18 (week 2367) May 27 03:22:39.406314 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 03:22:39.224720 ntpd[1545]: Listen and drop on 0 v6wildcard [::]:123 May 27 03:22:39.462807 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 03:22:39.224785 ntpd[1545]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 27 03:22:39.225036 ntpd[1545]: Listen normally on 2 lo 127.0.0.1:123 May 27 03:22:39.225233 ntpd[1545]: Listen normally on 3 eth0 10.128.0.39:123 May 27 03:22:39.225296 ntpd[1545]: Listen normally on 4 lo [::1]:123 May 27 03:22:39.225816 ntpd[1545]: bind(21) AF_INET6 fe80::4001:aff:fe80:27%2#123 flags 0x11 failed: Cannot assign requested address May 27 03:22:39.225859 ntpd[1545]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:27%2#123 May 27 03:22:39.225899 ntpd[1545]: failed to init interface for address fe80::4001:aff:fe80:27%2 May 27 03:22:39.225949 ntpd[1545]: Listening on routing socket on fd #21 for interface updates May 27 03:22:39.231673 ntpd[1545]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 03:22:39.231726 ntpd[1545]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 03:22:39.496853 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:22:39.513132 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 03:22:39.546552 bash[1618]: Updated "/home/core/.ssh/authorized_keys" May 27 03:22:39.551495 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:22:39.557560 systemd[1]: Starting sshkeys.service... 
May 27 03:22:39.587621 dbus-daemon[1538]: [system] SELinux support is enabled May 27 03:22:39.592219 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 03:22:39.599936 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 03:22:39.600440 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 03:22:39.600605 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 03:22:39.600630 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 03:22:39.615559 dbus-daemon[1538]: [system] Successfully activated service 'org.freedesktop.systemd1' May 27 03:22:39.627150 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 03:22:39.629441 update_engine[1554]: I20250527 03:22:39.628868 1554 update_check_scheduler.cc:74] Next update check in 7m7s May 27 03:22:39.630265 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 03:22:39.630849 systemd[1]: Started update-engine.service - Update Engine. May 27 03:22:39.646577 dbus-daemon[1538]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1450 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 27 03:22:39.648239 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:22:39.657784 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 27 03:22:39.702944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 03:22:39.741225 coreos-metadata[1621]: May 27 03:22:39.740 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 May 27 03:22:39.746495 coreos-metadata[1621]: May 27 03:22:39.746 INFO Fetch failed with 404: resource not found May 27 03:22:39.746495 coreos-metadata[1621]: May 27 03:22:39.746 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 May 27 03:22:39.749140 coreos-metadata[1621]: May 27 03:22:39.748 INFO Fetch successful May 27 03:22:39.749140 coreos-metadata[1621]: May 27 03:22:39.749 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 May 27 03:22:39.749498 coreos-metadata[1621]: May 27 03:22:39.749 INFO Fetch failed with 404: resource not found May 27 03:22:39.749668 coreos-metadata[1621]: May 27 03:22:39.749 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 May 27 03:22:39.755389 coreos-metadata[1621]: May 27 03:22:39.750 INFO Fetch failed with 404: resource not found May 27 03:22:39.755389 coreos-metadata[1621]: May 27 03:22:39.753 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 May 27 03:22:39.758330 coreos-metadata[1621]: May 27 03:22:39.757 INFO Fetch successful May 27 03:22:39.762163 unknown[1621]: wrote ssh authorized keys file for user: core May 27 03:22:39.844222 update-ssh-keys[1633]: Updated "/home/core/.ssh/authorized_keys" May 27 03:22:39.845471 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 03:22:39.863451 systemd[1]: Finished sshkeys.service. May 27 03:22:39.980325 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
May 27 03:22:39.983946 dbus-daemon[1538]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 27 03:22:39.988447 dbus-daemon[1538]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1623 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 27 03:22:40.003430 systemd[1]: Starting polkit.service - Authorization Manager...
May 27 03:22:40.123021 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 03:22:40.196609 systemd-networkd[1450]: eth0: Gained IPv6LL
May 27 03:22:40.212449 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 03:22:40.223301 systemd[1]: Reached target network-online.target - Network is Online.
May 27 03:22:40.240486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:22:40.245314 containerd[1566]: time="2025-05-27T03:22:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 03:22:40.254528 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 03:22:40.257962 containerd[1566]: time="2025-05-27T03:22:40.257905442Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 03:22:40.267739 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
May 27 03:22:40.274334 polkitd[1640]: Started polkitd version 126
May 27 03:22:40.296106 init.sh[1646]: + '[' -e /etc/default/instance_configs.cfg.template ']'
May 27 03:22:40.296106 init.sh[1646]: + echo -e '[InstanceSetup]\nset_host_keys = false'
May 27 03:22:40.299992 init.sh[1646]: + /usr/bin/google_instance_setup
May 27 03:22:40.311780 polkitd[1640]: Loading rules from directory /etc/polkit-1/rules.d
May 27 03:22:40.316870 polkitd[1640]: Loading rules from directory /run/polkit-1/rules.d
May 27 03:22:40.316943 polkitd[1640]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 03:22:40.322872 polkitd[1640]: Loading rules from directory /usr/local/share/polkit-1/rules.d
May 27 03:22:40.322928 polkitd[1640]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 03:22:40.323005 polkitd[1640]: Loading rules from directory /usr/share/polkit-1/rules.d
May 27 03:22:40.326232 polkitd[1640]: Finished loading, compiling and executing 2 rules
May 27 03:22:40.326611 systemd[1]: Started polkit.service - Authorization Manager.
May 27 03:22:40.329309 dbus-daemon[1538]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 27 03:22:40.331496 polkitd[1640]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 27 03:22:40.357751 containerd[1566]: time="2025-05-27T03:22:40.357694179Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.333µs"
May 27 03:22:40.357972 containerd[1566]: time="2025-05-27T03:22:40.357943975Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 03:22:40.358075 containerd[1566]: time="2025-05-27T03:22:40.358056596Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 03:22:40.358616 containerd[1566]: time="2025-05-27T03:22:40.358588845Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 03:22:40.359686 containerd[1566]: time="2025-05-27T03:22:40.358977209Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 03:22:40.359686 containerd[1566]: time="2025-05-27T03:22:40.359033918Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:22:40.359686 containerd[1566]: time="2025-05-27T03:22:40.359138564Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:22:40.359686 containerd[1566]: time="2025-05-27T03:22:40.359156085Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:22:40.362254 containerd[1566]: time="2025-05-27T03:22:40.361624639Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:22:40.362254 containerd[1566]: time="2025-05-27T03:22:40.361660337Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:22:40.362254 containerd[1566]: time="2025-05-27T03:22:40.361681307Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:22:40.362254 containerd[1566]: time="2025-05-27T03:22:40.361697995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 03:22:40.362254 containerd[1566]: time="2025-05-27T03:22:40.361871918Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 03:22:40.362254 containerd[1566]: time="2025-05-27T03:22:40.362202986Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:22:40.364005 containerd[1566]: time="2025-05-27T03:22:40.363972080Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:22:40.367562 containerd[1566]: time="2025-05-27T03:22:40.365877710Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 03:22:40.367562 containerd[1566]: time="2025-05-27T03:22:40.365998685Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 03:22:40.367916 containerd[1566]: time="2025-05-27T03:22:40.367840013Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 03:22:40.368832 containerd[1566]: time="2025-05-27T03:22:40.368805762Z" level=info msg="metadata content store policy set" policy=shared
May 27 03:22:40.376626 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 03:22:40.385203 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 03:22:40.386441 containerd[1566]: time="2025-05-27T03:22:40.386394023Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 03:22:40.386526 containerd[1566]: time="2025-05-27T03:22:40.386488912Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386517289Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386636646Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386662616Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386680722Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386702782Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386734916Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386753704Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386770992Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386787133Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386807817Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386963226Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.386998531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.387027928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 03:22:40.387261 containerd[1566]: time="2025-05-27T03:22:40.387055375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387073163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387097545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387117260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387133820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387152845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387172242Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387189899Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387294132Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387317091Z" level=info msg="Start snapshots syncer"
May 27 03:22:40.389726 containerd[1566]: time="2025-05-27T03:22:40.387449641Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 03:22:40.392338 containerd[1566]: time="2025-05-27T03:22:40.387806910Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 03:22:40.392338 containerd[1566]: time="2025-05-27T03:22:40.387896079Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 03:22:40.392595 containerd[1566]: time="2025-05-27T03:22:40.392545920Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392736324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392782404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392802771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392823736Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392851510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392869882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392888171Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392929684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392948375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 03:22:40.393413 containerd[1566]: time="2025-05-27T03:22:40.392978132Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 03:22:40.394823 containerd[1566]: time="2025-05-27T03:22:40.394222455Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395446088Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395476370Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395495800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395510660Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395529010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395547549Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395574489Z" level=info msg="runtime interface created"
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395582455Z" level=info msg="created NRI interface"
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395595452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 03:22:40.395799 containerd[1566]: time="2025-05-27T03:22:40.395614126Z" level=info msg="Connect containerd service"
May 27 03:22:40.396157 systemd-hostnamed[1623]: Hostname set to (transient)
May 27 03:22:40.396624 containerd[1566]: time="2025-05-27T03:22:40.396328253Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 03:22:40.397320 systemd-resolved[1384]: System hostname changed to 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal'.
May 27 03:22:40.401381 containerd[1566]: time="2025-05-27T03:22:40.400067312Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 03:22:40.522166 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 03:22:40.536095 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 03:22:40.545825 systemd[1]: Started sshd@0-10.128.0.39:22-139.178.68.195:45912.service - OpenSSH per-connection server daemon (139.178.68.195:45912).
May 27 03:22:40.611704 systemd[1]: issuegen.service: Deactivated successfully.
May 27 03:22:40.614240 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 03:22:40.630243 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 03:22:40.723198 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 03:22:40.741590 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 03:22:40.757152 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 03:22:40.766988 systemd[1]: Reached target getty.target - Login Prompts.
May 27 03:22:40.894125 containerd[1566]: time="2025-05-27T03:22:40.894003505Z" level=info msg="Start subscribing containerd event"
May 27 03:22:40.894125 containerd[1566]: time="2025-05-27T03:22:40.894096589Z" level=info msg="Start recovering state"
May 27 03:22:40.894295 containerd[1566]: time="2025-05-27T03:22:40.894246309Z" level=info msg="Start event monitor"
May 27 03:22:40.894295 containerd[1566]: time="2025-05-27T03:22:40.894268944Z" level=info msg="Start cni network conf syncer for default"
May 27 03:22:40.894295 containerd[1566]: time="2025-05-27T03:22:40.894281309Z" level=info msg="Start streaming server"
May 27 03:22:40.894430 containerd[1566]: time="2025-05-27T03:22:40.894297120Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 03:22:40.894430 containerd[1566]: time="2025-05-27T03:22:40.894309075Z" level=info msg="runtime interface starting up..."
May 27 03:22:40.894430 containerd[1566]: time="2025-05-27T03:22:40.894319711Z" level=info msg="starting plugins..."
May 27 03:22:40.894430 containerd[1566]: time="2025-05-27T03:22:40.894339534Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 03:22:40.896231 containerd[1566]: time="2025-05-27T03:22:40.895946131Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 03:22:40.896231 containerd[1566]: time="2025-05-27T03:22:40.896048702Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 03:22:40.896231 containerd[1566]: time="2025-05-27T03:22:40.896120916Z" level=info msg="containerd successfully booted in 0.651434s"
May 27 03:22:40.896595 systemd[1]: Started containerd.service - containerd container runtime.
May 27 03:22:41.063384 tar[1561]: linux-amd64/README.md
May 27 03:22:41.089516 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 03:22:41.122423 sshd[1681]: Accepted publickey for core from 139.178.68.195 port 45912 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:22:41.129176 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:41.147178 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 03:22:41.158862 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 03:22:41.192874 systemd-logind[1551]: New session 1 of user core.
May 27 03:22:41.206713 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 03:22:41.224265 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 03:22:41.262433 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 03:22:41.269845 systemd-logind[1551]: New session c1 of user core.
May 27 03:22:41.399818 instance-setup[1654]: INFO Running google_set_multiqueue.
May 27 03:22:41.426247 instance-setup[1654]: INFO Set channels for eth0 to 2.
May 27 03:22:41.433491 instance-setup[1654]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
May 27 03:22:41.434627 instance-setup[1654]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
May 27 03:22:41.435121 instance-setup[1654]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
May 27 03:22:41.437094 instance-setup[1654]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
May 27 03:22:41.439397 instance-setup[1654]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
May 27 03:22:41.439656 instance-setup[1654]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
May 27 03:22:41.440138 instance-setup[1654]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
May 27 03:22:41.442107 instance-setup[1654]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
May 27 03:22:41.452330 instance-setup[1654]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
May 27 03:22:41.457522 instance-setup[1654]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
May 27 03:22:41.459979 instance-setup[1654]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
May 27 03:22:41.460164 instance-setup[1654]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
May 27 03:22:41.508245 init.sh[1646]: + /usr/bin/google_metadata_script_runner --script-type startup
May 27 03:22:41.641824 systemd[1703]: Queued start job for default target default.target.
May 27 03:22:41.647649 systemd[1703]: Created slice app.slice - User Application Slice.
May 27 03:22:41.647705 systemd[1703]: Reached target paths.target - Paths.
May 27 03:22:41.647774 systemd[1703]: Reached target timers.target - Timers.
May 27 03:22:41.651324 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 03:22:41.682109 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 03:22:41.682313 systemd[1703]: Reached target sockets.target - Sockets.
May 27 03:22:41.682409 systemd[1703]: Reached target basic.target - Basic System.
May 27 03:22:41.682492 systemd[1703]: Reached target default.target - Main User Target.
May 27 03:22:41.682565 systemd[1703]: Startup finished in 395ms.
May 27 03:22:41.683173 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 03:22:41.704488 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 03:22:41.725799 startup-script[1737]: INFO Starting startup scripts.
May 27 03:22:41.732853 startup-script[1737]: INFO No startup scripts found in metadata.
May 27 03:22:41.732966 startup-script[1737]: INFO Finished running startup scripts.
May 27 03:22:41.754630 init.sh[1646]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
May 27 03:22:41.754630 init.sh[1646]: + daemon_pids=()
May 27 03:22:41.754630 init.sh[1646]: + for d in accounts clock_skew network
May 27 03:22:41.754630 init.sh[1646]: + daemon_pids+=($!)
May 27 03:22:41.754630 init.sh[1646]: + for d in accounts clock_skew network
May 27 03:22:41.754906 init.sh[1646]: + daemon_pids+=($!)
May 27 03:22:41.754950 init.sh[1646]: + for d in accounts clock_skew network
May 27 03:22:41.755905 init.sh[1743]: + /usr/bin/google_accounts_daemon
May 27 03:22:41.757386 init.sh[1744]: + /usr/bin/google_clock_skew_daemon
May 27 03:22:41.757715 init.sh[1646]: + daemon_pids+=($!)
May 27 03:22:41.757715 init.sh[1646]: + NOTIFY_SOCKET=/run/systemd/notify
May 27 03:22:41.757715 init.sh[1646]: + /usr/bin/systemd-notify --ready
May 27 03:22:41.758234 init.sh[1745]: + /usr/bin/google_network_daemon
May 27 03:22:41.772628 systemd[1]: Started oem-gce.service - GCE Linux Agent.
May 27 03:22:41.783189 init.sh[1646]: + wait -n 1743 1744 1745
May 27 03:22:41.960882 systemd[1]: Started sshd@1-10.128.0.39:22-139.178.68.195:45928.service - OpenSSH per-connection server daemon (139.178.68.195:45928).
May 27 03:22:42.203280 google-clock-skew[1744]: INFO Starting Google Clock Skew daemon.
May 27 03:22:42.215026 ntpd[1545]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:27%2]:123
May 27 03:22:42.216065 ntpd[1545]: 27 May 03:22:42 ntpd[1545]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:27%2]:123
May 27 03:22:42.217066 google-clock-skew[1744]: INFO Clock drift token has changed: 0.
May 27 03:22:42.247900 google-networking[1745]: INFO Starting Google Networking daemon.
May 27 03:22:42.293112 groupadd[1758]: group added to /etc/group: name=google-sudoers, GID=1000
May 27 03:22:42.296559 groupadd[1758]: group added to /etc/gshadow: name=google-sudoers
May 27 03:22:42.326535 sshd[1749]: Accepted publickey for core from 139.178.68.195 port 45928 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:22:42.328940 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:42.338739 systemd-logind[1551]: New session 2 of user core.
May 27 03:22:42.343829 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 03:22:42.352090 groupadd[1758]: new group: name=google-sudoers, GID=1000
May 27 03:22:42.384943 google-accounts[1743]: INFO Starting Google Accounts daemon.
May 27 03:22:42.397259 google-accounts[1743]: WARNING OS Login not installed.
May 27 03:22:42.399056 google-accounts[1743]: INFO Creating a new user account for 0.
May 27 03:22:42.403344 init.sh[1768]: useradd: invalid user name '0': use --badname to ignore
May 27 03:22:42.403808 google-accounts[1743]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
May 27 03:22:42.540536 sshd[1765]: Connection closed by 139.178.68.195 port 45928
May 27 03:22:42.541261 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
May 27 03:22:42.546114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:22:42.558871 systemd[1]: sshd@1-10.128.0.39:22-139.178.68.195:45928.service: Deactivated successfully.
May 27 03:22:42.567968 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:22:42.570639 systemd[1]: session-2.scope: Deactivated successfully.
May 27 03:22:42.574796 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit.
May 27 03:22:42.575806 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 03:22:42.585731 systemd[1]: Startup finished in 3.710s (kernel) + 10.134s (initrd) + 9.672s (userspace) = 23.517s.
May 27 03:22:42.602706 systemd[1]: Started sshd@2-10.128.0.39:22-139.178.68.195:45944.service - OpenSSH per-connection server daemon (139.178.68.195:45944).
May 27 03:22:42.605733 systemd-logind[1551]: Removed session 2.
May 27 03:22:42.928242 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 45944 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:22:42.931172 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:42.939477 systemd-logind[1551]: New session 3 of user core.
May 27 03:22:42.946641 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 03:22:43.000590 systemd-resolved[1384]: Clock change detected. Flushing caches.
May 27 03:22:43.000971 google-clock-skew[1744]: INFO Synced system time with hardware clock.
May 27 03:22:43.118628 sshd[1791]: Connection closed by 139.178.68.195 port 45944
May 27 03:22:43.120265 sshd-session[1781]: pam_unix(sshd:session): session closed for user core
May 27 03:22:43.126655 systemd[1]: sshd@2-10.128.0.39:22-139.178.68.195:45944.service: Deactivated successfully.
May 27 03:22:43.129782 systemd[1]: session-3.scope: Deactivated successfully.
May 27 03:22:43.133027 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit.
May 27 03:22:43.135213 systemd-logind[1551]: Removed session 3.
May 27 03:22:43.448991 kubelet[1775]: E0527 03:22:43.448914 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:22:43.451800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:22:43.452045 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:22:43.452595 systemd[1]: kubelet.service: Consumed 1.292s CPU time, 267.3M memory peak.
May 27 03:22:53.188685 systemd[1]: Started sshd@3-10.128.0.39:22-139.178.68.195:51730.service - OpenSSH per-connection server daemon (139.178.68.195:51730).
May 27 03:22:53.503207 sshd[1799]: Accepted publickey for core from 139.178.68.195 port 51730 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:22:53.505231 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:53.506653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 03:22:53.509363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:22:53.517927 systemd-logind[1551]: New session 4 of user core.
May 27 03:22:53.525509 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 03:22:53.721880 sshd[1804]: Connection closed by 139.178.68.195 port 51730
May 27 03:22:53.722819 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
May 27 03:22:53.728386 systemd[1]: sshd@3-10.128.0.39:22-139.178.68.195:51730.service: Deactivated successfully.
May 27 03:22:53.731647 systemd[1]: session-4.scope: Deactivated successfully.
May 27 03:22:53.734538 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit.
May 27 03:22:53.738195 systemd-logind[1551]: Removed session 4.
May 27 03:22:53.777003 systemd[1]: Started sshd@4-10.128.0.39:22-139.178.68.195:34356.service - OpenSSH per-connection server daemon (139.178.68.195:34356).
May 27 03:22:53.891198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:22:53.902937 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:22:53.962938 kubelet[1817]: E0527 03:22:53.962864 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:22:53.967246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:22:53.967526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:22:53.968056 systemd[1]: kubelet.service: Consumed 218ms CPU time, 110.9M memory peak.
May 27 03:22:54.098533 sshd[1810]: Accepted publickey for core from 139.178.68.195 port 34356 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:22:54.100115 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:54.108011 systemd-logind[1551]: New session 5 of user core.
May 27 03:22:54.122595 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 03:22:54.309476 sshd[1824]: Connection closed by 139.178.68.195 port 34356
May 27 03:22:54.310364 sshd-session[1810]: pam_unix(sshd:session): session closed for user core
May 27 03:22:54.316074 systemd[1]: sshd@4-10.128.0.39:22-139.178.68.195:34356.service: Deactivated successfully.
May 27 03:22:54.318561 systemd[1]: session-5.scope: Deactivated successfully.
May 27 03:22:54.319737 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit.
May 27 03:22:54.321988 systemd-logind[1551]: Removed session 5.
May 27 03:22:54.361561 systemd[1]: Started sshd@5-10.128.0.39:22-139.178.68.195:34362.service - OpenSSH per-connection server daemon (139.178.68.195:34362).
May 27 03:22:54.680325 sshd[1830]: Accepted publickey for core from 139.178.68.195 port 34362 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:22:54.682228 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:54.689415 systemd-logind[1551]: New session 6 of user core.
May 27 03:22:54.696554 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 03:22:54.893508 sshd[1832]: Connection closed by 139.178.68.195 port 34362
May 27 03:22:54.894457 sshd-session[1830]: pam_unix(sshd:session): session closed for user core
May 27 03:22:54.900097 systemd[1]: sshd@5-10.128.0.39:22-139.178.68.195:34362.service: Deactivated successfully.
May 27 03:22:54.902536 systemd[1]: session-6.scope: Deactivated successfully.
May 27 03:22:54.903900 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit.
May 27 03:22:54.906102 systemd-logind[1551]: Removed session 6.
May 27 03:22:54.950608 systemd[1]: Started sshd@6-10.128.0.39:22-139.178.68.195:34374.service - OpenSSH per-connection server daemon (139.178.68.195:34374).
May 27 03:22:55.270169 sshd[1838]: Accepted publickey for core from 139.178.68.195 port 34374 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:22:55.271712 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:55.279095 systemd-logind[1551]: New session 7 of user core.
May 27 03:22:55.285556 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 03:22:55.464583 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 03:22:55.465053 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:22:55.480140 sudo[1841]: pam_unix(sudo:session): session closed for user root
May 27 03:22:55.523036 sshd[1840]: Connection closed by 139.178.68.195 port 34374
May 27 03:22:55.524067 sshd-session[1838]: pam_unix(sshd:session): session closed for user core
May 27 03:22:55.530006 systemd[1]: sshd@6-10.128.0.39:22-139.178.68.195:34374.service: Deactivated successfully.
May 27 03:22:55.532619 systemd[1]: session-7.scope: Deactivated successfully.
May 27 03:22:55.533829 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit.
May 27 03:22:55.536038 systemd-logind[1551]: Removed session 7.
May 27 03:22:55.580867 systemd[1]: Started sshd@7-10.128.0.39:22-139.178.68.195:34384.service - OpenSSH per-connection server daemon (139.178.68.195:34384).
May 27 03:22:55.887141 sshd[1847]: Accepted publickey for core from 139.178.68.195 port 34384 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:22:55.889000 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:55.896414 systemd-logind[1551]: New session 8 of user core.
May 27 03:22:55.903575 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 03:22:56.065881 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 03:22:56.066379 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:22:56.073068 sudo[1851]: pam_unix(sudo:session): session closed for user root
May 27 03:22:56.086307 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 03:22:56.086785 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:22:56.099932 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:22:56.154472 augenrules[1873]: No rules
May 27 03:22:56.155742 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:22:56.156081 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:22:56.158193 sudo[1850]: pam_unix(sudo:session): session closed for user root
May 27 03:22:56.201431 sshd[1849]: Connection closed by 139.178.68.195 port 34384
May 27 03:22:56.202291 sshd-session[1847]: pam_unix(sshd:session): session closed for user core
May 27 03:22:56.207560 systemd[1]: sshd@7-10.128.0.39:22-139.178.68.195:34384.service: Deactivated successfully.
May 27 03:22:56.210141 systemd[1]: session-8.scope: Deactivated successfully.
May 27 03:22:56.212414 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit.
May 27 03:22:56.214701 systemd-logind[1551]: Removed session 8.
May 27 03:22:56.259580 systemd[1]: Started sshd@8-10.128.0.39:22-139.178.68.195:34400.service - OpenSSH per-connection server daemon (139.178.68.195:34400).
May 27 03:22:56.564178 sshd[1882]: Accepted publickey for core from 139.178.68.195 port 34400 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:22:56.566188 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:56.573415 systemd-logind[1551]: New session 9 of user core.
May 27 03:22:56.580566 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 03:22:56.743652 sudo[1885]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 03:22:56.744142 sudo[1885]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:22:57.243837 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 03:22:57.258939 (dockerd)[1902]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 03:22:57.597838 dockerd[1902]: time="2025-05-27T03:22:57.597646202Z" level=info msg="Starting up"
May 27 03:22:57.599979 dockerd[1902]: time="2025-05-27T03:22:57.599924556Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 03:22:57.690619 dockerd[1902]: time="2025-05-27T03:22:57.690557534Z" level=info msg="Loading containers: start."
May 27 03:22:57.709530 kernel: Initializing XFRM netlink socket
May 27 03:22:58.046135 systemd-networkd[1450]: docker0: Link UP
May 27 03:22:58.053160 dockerd[1902]: time="2025-05-27T03:22:58.053077742Z" level=info msg="Loading containers: done."
May 27 03:22:58.074396 dockerd[1902]: time="2025-05-27T03:22:58.073133443Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 03:22:58.074396 dockerd[1902]: time="2025-05-27T03:22:58.073258563Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 03:22:58.074396 dockerd[1902]: time="2025-05-27T03:22:58.073427862Z" level=info msg="Initializing buildkit"
May 27 03:22:58.074419 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck393643306-merged.mount: Deactivated successfully.
May 27 03:22:58.108378 dockerd[1902]: time="2025-05-27T03:22:58.108303511Z" level=info msg="Completed buildkit initialization"
May 27 03:22:58.118974 dockerd[1902]: time="2025-05-27T03:22:58.118888301Z" level=info msg="Daemon has completed initialization"
May 27 03:22:58.120043 dockerd[1902]: time="2025-05-27T03:22:58.119108348Z" level=info msg="API listen on /run/docker.sock"
May 27 03:22:58.119277 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 03:22:59.007328 containerd[1566]: time="2025-05-27T03:22:59.007282099Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 27 03:22:59.528316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113286998.mount: Deactivated successfully.
May 27 03:23:00.964634 containerd[1566]: time="2025-05-27T03:23:00.964563753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:00.966074 containerd[1566]: time="2025-05-27T03:23:00.966019912Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28804439"
May 27 03:23:00.967146 containerd[1566]: time="2025-05-27T03:23:00.967068094Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:00.970148 containerd[1566]: time="2025-05-27T03:23:00.970087519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:00.971514 containerd[1566]: time="2025-05-27T03:23:00.971256293Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.963920745s"
May 27 03:23:00.971514 containerd[1566]: time="2025-05-27T03:23:00.971305643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 27 03:23:00.972197 containerd[1566]: time="2025-05-27T03:23:00.972162323Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 03:23:02.402448 containerd[1566]: time="2025-05-27T03:23:02.402376704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:02.403822 containerd[1566]: time="2025-05-27T03:23:02.403766196Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24784457"
May 27 03:23:02.405217 containerd[1566]: time="2025-05-27T03:23:02.405140474Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:02.408619 containerd[1566]: time="2025-05-27T03:23:02.408555154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:02.409973 containerd[1566]: time="2025-05-27T03:23:02.409814493Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.437607617s"
May 27 03:23:02.409973 containerd[1566]: time="2025-05-27T03:23:02.409858209Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 27 03:23:02.410655 containerd[1566]: time="2025-05-27T03:23:02.410617832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 03:23:03.553994 containerd[1566]: time="2025-05-27T03:23:03.553926872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:03.555182 containerd[1566]: time="2025-05-27T03:23:03.555127790Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19177979"
May 27 03:23:03.556512 containerd[1566]: time="2025-05-27T03:23:03.556446748Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:03.559615 containerd[1566]: time="2025-05-27T03:23:03.559552386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:03.561144 containerd[1566]: time="2025-05-27T03:23:03.560945557Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.150282409s"
May 27 03:23:03.561144 containerd[1566]: time="2025-05-27T03:23:03.560992431Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 27 03:23:03.561987 containerd[1566]: time="2025-05-27T03:23:03.561954068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 27 03:23:04.066059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 03:23:04.069588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:04.500566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:04.519061 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:23:04.596841 kubelet[2177]: E0527 03:23:04.596773 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:23:04.601036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:23:04.601638 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:23:04.602539 systemd[1]: kubelet.service: Consumed 250ms CPU time, 108.3M memory peak.
May 27 03:23:04.778365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785316096.mount: Deactivated successfully.
May 27 03:23:05.426319 containerd[1566]: time="2025-05-27T03:23:05.426252786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:05.427588 containerd[1566]: time="2025-05-27T03:23:05.427536435Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30894767"
May 27 03:23:05.429076 containerd[1566]: time="2025-05-27T03:23:05.429004214Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:05.431645 containerd[1566]: time="2025-05-27T03:23:05.431579524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:05.432842 containerd[1566]: time="2025-05-27T03:23:05.432426239Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.87043353s"
May 27 03:23:05.432842 containerd[1566]: time="2025-05-27T03:23:05.432477681Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 27 03:23:05.433109 containerd[1566]: time="2025-05-27T03:23:05.433069272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 03:23:05.806572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421683052.mount: Deactivated successfully.
May 27 03:23:06.916099 containerd[1566]: time="2025-05-27T03:23:06.916031269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:06.917419 containerd[1566]: time="2025-05-27T03:23:06.917369098Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
May 27 03:23:06.918785 containerd[1566]: time="2025-05-27T03:23:06.918720433Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:06.921992 containerd[1566]: time="2025-05-27T03:23:06.921929561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:06.923513 containerd[1566]: time="2025-05-27T03:23:06.923327163Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.49021198s"
May 27 03:23:06.923513 containerd[1566]: time="2025-05-27T03:23:06.923387365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 27 03:23:06.924526 containerd[1566]: time="2025-05-27T03:23:06.924436502Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 03:23:07.252619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135182758.mount: Deactivated successfully.
May 27 03:23:07.258044 containerd[1566]: time="2025-05-27T03:23:07.257987884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:23:07.259276 containerd[1566]: time="2025-05-27T03:23:07.259175298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
May 27 03:23:07.260671 containerd[1566]: time="2025-05-27T03:23:07.260607976Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:23:07.264270 containerd[1566]: time="2025-05-27T03:23:07.263761345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:23:07.264951 containerd[1566]: time="2025-05-27T03:23:07.264910982Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 340.437806ms"
May 27 03:23:07.265331 containerd[1566]: time="2025-05-27T03:23:07.264954521Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 03:23:07.265654 containerd[1566]: time="2025-05-27T03:23:07.265606005Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 27 03:23:07.628003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428231872.mount: Deactivated successfully.
May 27 03:23:09.791877 containerd[1566]: time="2025-05-27T03:23:09.791801049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:09.793412 containerd[1566]: time="2025-05-27T03:23:09.793356612Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57557924"
May 27 03:23:09.794713 containerd[1566]: time="2025-05-27T03:23:09.794645330Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:09.798705 containerd[1566]: time="2025-05-27T03:23:09.798663855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:09.800885 containerd[1566]: time="2025-05-27T03:23:09.800535877Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.534888473s"
May 27 03:23:09.800885 containerd[1566]: time="2025-05-27T03:23:09.800640568Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 27 03:23:10.398469 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 27 03:23:14.053466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:14.053907 systemd[1]: kubelet.service: Consumed 250ms CPU time, 108.3M memory peak.
May 27 03:23:14.057170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:14.095132 systemd[1]: Reload requested from client PID 2330 ('systemctl') (unit session-9.scope)...
May 27 03:23:14.095153 systemd[1]: Reloading...
May 27 03:23:14.264377 zram_generator::config[2377]: No configuration found.
May 27 03:23:14.406310 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:23:14.586947 systemd[1]: Reloading finished in 491 ms.
May 27 03:23:14.707672 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 03:23:14.707818 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 03:23:14.708239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:14.708328 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.1M memory peak.
May 27 03:23:14.715401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:15.603512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:15.616889 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:23:15.671235 kubelet[2424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:23:15.671718 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 03:23:15.671718 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:23:15.671718 kubelet[2424]: I0527 03:23:15.671409 2424 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:23:16.295098 kubelet[2424]: I0527 03:23:16.295031 2424 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 03:23:16.295098 kubelet[2424]: I0527 03:23:16.295070 2424 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:23:16.295541 kubelet[2424]: I0527 03:23:16.295515 2424 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 03:23:16.341498 kubelet[2424]: E0527 03:23:16.341441 2424 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.39:6443: connect: connection refused" logger="UnhandledError"
May 27 03:23:16.342760 kubelet[2424]: I0527 03:23:16.342580 2424 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:23:16.360889 kubelet[2424]: I0527 03:23:16.360864 2424 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:23:16.365374 kubelet[2424]: I0527 03:23:16.365305 2424 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:23:16.368171 kubelet[2424]: I0527 03:23:16.368098 2424 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:23:16.368435 kubelet[2424]: I0527 03:23:16.368154 2424 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:23:16.368647 kubelet[2424]: I0527 03:23:16.368444 2424 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:23:16.368647 kubelet[2424]: I0527 03:23:16.368463 2424 container_manager_linux.go:304] "Creating device plugin manager"
May 27 03:23:16.368647 kubelet[2424]: I0527 03:23:16.368622 2424 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:23:16.374299 kubelet[2424]: I0527 03:23:16.374211 2424 kubelet.go:446] "Attempting to sync node with API server"
May 27 03:23:16.376944 kubelet[2424]: I0527 03:23:16.376529 2424 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:23:16.376944 kubelet[2424]: I0527 03:23:16.376586 2424 kubelet.go:352] "Adding apiserver pod source"
May 27 03:23:16.376944 kubelet[2424]: I0527 03:23:16.376605 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:23:16.383560 kubelet[2424]: W0527 03:23:16.382596 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.39:6443: connect: connection refused
May 27 03:23:16.383560 kubelet[2424]: E0527 03:23:16.382679 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.39:6443: connect: connection refused" logger="UnhandledError"
May 27 03:23:16.383560 kubelet[2424]: W0527 03:23:16.383167 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.39:6443: connect: connection refused
May 27 03:23:16.383560 kubelet[2424]: E0527 03:23:16.383224 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.39:6443: connect: connection refused" logger="UnhandledError"
May 27 03:23:16.383819 kubelet[2424]: I0527 03:23:16.383756 2424 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:23:16.384263 kubelet[2424]: I0527 03:23:16.384218 2424 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 03:23:16.386364 kubelet[2424]: W0527 03:23:16.385438 2424 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 03:23:16.388570 kubelet[2424]: I0527 03:23:16.388548 2424 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:23:16.388721 kubelet[2424]: I0527 03:23:16.388708 2424 server.go:1287] "Started kubelet" May 27 03:23:16.390590 kubelet[2424]: I0527 03:23:16.389722 2424 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:23:16.391072 kubelet[2424]: I0527 03:23:16.391026 2424 server.go:479] "Adding debug handlers to kubelet server" May 27 03:23:16.394236 kubelet[2424]: I0527 03:23:16.394185 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:23:16.396014 kubelet[2424]: I0527 03:23:16.395945 2424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:23:16.396264 kubelet[2424]: I0527 03:23:16.396238 2424 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:23:16.401951 kubelet[2424]: E0527 03:23:16.398968 2424 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal.1843445150cf2fda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,UID:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,},FirstTimestamp:2025-05-27 03:23:16.388679642 +0000 UTC m=+0.766488172,LastTimestamp:2025-05-27 03:23:16.388679642 +0000 UTC m=+0.766488172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,}" May 27 03:23:16.402532 kubelet[2424]: I0527 03:23:16.402513 2424 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:23:16.403039 kubelet[2424]: I0527 03:23:16.403013 2424 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:23:16.404133 kubelet[2424]: E0527 03:23:16.404094 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" May 27 03:23:16.405845 kubelet[2424]: I0527 03:23:16.405821 2424 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:23:16.406014 kubelet[2424]: I0527 03:23:16.405999 2424 reconciler.go:26] "Reconciler: start to sync state" May 27 03:23:16.407474 kubelet[2424]: W0527 03:23:16.407383 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.39:6443: connect: connection refused May 27 03:23:16.407751 kubelet[2424]: E0527 03:23:16.407722 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.39:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:16.408047 kubelet[2424]: E0527 03:23:16.408003 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.39:6443: connect: 
connection refused" interval="200ms" May 27 03:23:16.408389 kubelet[2424]: I0527 03:23:16.408368 2424 factory.go:221] Registration of the systemd container factory successfully May 27 03:23:16.408598 kubelet[2424]: I0527 03:23:16.408574 2424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:23:16.412334 kubelet[2424]: E0527 03:23:16.411916 2424 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:23:16.412334 kubelet[2424]: I0527 03:23:16.412080 2424 factory.go:221] Registration of the containerd container factory successfully May 27 03:23:16.435137 kubelet[2424]: I0527 03:23:16.435073 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:23:16.437261 kubelet[2424]: I0527 03:23:16.437233 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:23:16.437448 kubelet[2424]: I0527 03:23:16.437421 2424 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:23:16.437867 kubelet[2424]: I0527 03:23:16.437550 2424 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 03:23:16.437867 kubelet[2424]: I0527 03:23:16.437568 2424 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:23:16.437867 kubelet[2424]: E0527 03:23:16.437633 2424 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:23:16.445948 kubelet[2424]: W0527 03:23:16.445915 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.39:6443: connect: connection refused May 27 03:23:16.446052 kubelet[2424]: E0527 03:23:16.445962 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.39:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:16.446399 kubelet[2424]: I0527 03:23:16.446323 2424 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:23:16.446547 kubelet[2424]: I0527 03:23:16.446530 2424 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:23:16.446911 kubelet[2424]: I0527 03:23:16.446637 2424 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:16.450012 kubelet[2424]: I0527 03:23:16.449994 2424 policy_none.go:49] "None policy: Start" May 27 03:23:16.450110 kubelet[2424]: I0527 03:23:16.450100 2424 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:23:16.450169 kubelet[2424]: I0527 03:23:16.450162 2424 state_mem.go:35] "Initializing new in-memory state store" May 27 03:23:16.458128 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 03:23:16.476241 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 27 03:23:16.481422 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:23:16.494450 kubelet[2424]: I0527 03:23:16.494399 2424 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:23:16.495363 kubelet[2424]: I0527 03:23:16.495324 2424 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:23:16.495535 kubelet[2424]: I0527 03:23:16.495374 2424 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:23:16.497821 kubelet[2424]: I0527 03:23:16.497545 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:23:16.498665 kubelet[2424]: E0527 03:23:16.498643 2424 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:23:16.498854 kubelet[2424]: E0527 03:23:16.498815 2424 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" May 27 03:23:16.559230 systemd[1]: Created slice kubepods-burstable-pod66076a3ccc8720f853f364791a6ce3c1.slice - libcontainer container kubepods-burstable-pod66076a3ccc8720f853f364791a6ce3c1.slice. May 27 03:23:16.575328 kubelet[2424]: E0527 03:23:16.575287 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.581776 systemd[1]: Created slice kubepods-burstable-pod9831c815e4e9d18da230dfc0a4aa4502.slice - libcontainer container kubepods-burstable-pod9831c815e4e9d18da230dfc0a4aa4502.slice. 
May 27 03:23:16.585046 kubelet[2424]: E0527 03:23:16.585011 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.588649 systemd[1]: Created slice kubepods-burstable-pod5a4c33103584c2ac40002dfff79beaa1.slice - libcontainer container kubepods-burstable-pod5a4c33103584c2ac40002dfff79beaa1.slice. May 27 03:23:16.591446 kubelet[2424]: E0527 03:23:16.591398 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.600917 kubelet[2424]: I0527 03:23:16.600837 2424 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.601426 kubelet[2424]: E0527 03:23:16.601332 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.39:6443/api/v1/nodes\": dial tcp 10.128.0.39:6443: connect: connection refused" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.609331 kubelet[2424]: E0527 03:23:16.609284 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.39:6443: connect: connection refused" interval="400ms" May 27 03:23:16.707827 kubelet[2424]: I0527 03:23:16.707712 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-kubeconfig\") pod 
\"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.707827 kubelet[2424]: I0527 03:23:16.707783 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66076a3ccc8720f853f364791a6ce3c1-k8s-certs\") pod \"kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"66076a3ccc8720f853f364791a6ce3c1\") " pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.707827 kubelet[2424]: I0527 03:23:16.707819 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-ca-certs\") pod \"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.708570 kubelet[2424]: I0527 03:23:16.707846 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66076a3ccc8720f853f364791a6ce3c1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"66076a3ccc8720f853f364791a6ce3c1\") " pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.708570 kubelet[2424]: I0527 03:23:16.707878 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.708570 kubelet[2424]: I0527 03:23:16.707909 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-k8s-certs\") pod \"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.708570 kubelet[2424]: I0527 03:23:16.707944 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.708706 kubelet[2424]: I0527 03:23:16.707972 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a4c33103584c2ac40002dfff79beaa1-kubeconfig\") pod \"kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"5a4c33103584c2ac40002dfff79beaa1\") " pod="kube-system/kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.708706 kubelet[2424]: I0527 03:23:16.707999 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66076a3ccc8720f853f364791a6ce3c1-ca-certs\") 
pod \"kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"66076a3ccc8720f853f364791a6ce3c1\") " pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.854503 kubelet[2424]: I0527 03:23:16.854355 2424 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.854967 kubelet[2424]: E0527 03:23:16.854906 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.39:6443/api/v1/nodes\": dial tcp 10.128.0.39:6443: connect: connection refused" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:16.877135 containerd[1566]: time="2025-05-27T03:23:16.877083646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,Uid:66076a3ccc8720f853f364791a6ce3c1,Namespace:kube-system,Attempt:0,}" May 27 03:23:16.886743 containerd[1566]: time="2025-05-27T03:23:16.886659120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,Uid:9831c815e4e9d18da230dfc0a4aa4502,Namespace:kube-system,Attempt:0,}" May 27 03:23:16.893409 containerd[1566]: time="2025-05-27T03:23:16.893284816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,Uid:5a4c33103584c2ac40002dfff79beaa1,Namespace:kube-system,Attempt:0,}" May 27 03:23:16.922881 containerd[1566]: time="2025-05-27T03:23:16.922821006Z" level=info msg="connecting to shim 528b431b2a48728233c4a1baab5524469daa7b6813fd9fbae9f47e654b96acd1" address="unix:///run/containerd/s/3d20269278eeeb22cbbb82268ea8a696aa6856303094c45dabf82d203f73c263" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:16.962633 containerd[1566]: time="2025-05-27T03:23:16.962539859Z" level=info 
msg="connecting to shim b236328e0519e95a592c6ed5ff5c778365499d224baf6e5fcfb6cac398932f5a" address="unix:///run/containerd/s/c88af6516de6f25218cc88b380b535f7f08cead105ff3be3809d7e3a8515363c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:16.972626 containerd[1566]: time="2025-05-27T03:23:16.972567858Z" level=info msg="connecting to shim 9e6ad45279f843323b45ebd0c37e24566269fb9ed2e33596a67fb121dd205807" address="unix:///run/containerd/s/a8addf2ec2bd8213bf27e62a31022f34c2f1078a1d95235288877762f0e465ef" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:17.009617 systemd[1]: Started cri-containerd-528b431b2a48728233c4a1baab5524469daa7b6813fd9fbae9f47e654b96acd1.scope - libcontainer container 528b431b2a48728233c4a1baab5524469daa7b6813fd9fbae9f47e654b96acd1. May 27 03:23:17.011041 kubelet[2424]: E0527 03:23:17.010271 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.39:6443: connect: connection refused" interval="800ms" May 27 03:23:17.040547 systemd[1]: Started cri-containerd-9e6ad45279f843323b45ebd0c37e24566269fb9ed2e33596a67fb121dd205807.scope - libcontainer container 9e6ad45279f843323b45ebd0c37e24566269fb9ed2e33596a67fb121dd205807. May 27 03:23:17.042575 systemd[1]: Started cri-containerd-b236328e0519e95a592c6ed5ff5c778365499d224baf6e5fcfb6cac398932f5a.scope - libcontainer container b236328e0519e95a592c6ed5ff5c778365499d224baf6e5fcfb6cac398932f5a. 
May 27 03:23:17.155135 containerd[1566]: time="2025-05-27T03:23:17.155089285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,Uid:66076a3ccc8720f853f364791a6ce3c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"528b431b2a48728233c4a1baab5524469daa7b6813fd9fbae9f47e654b96acd1\"" May 27 03:23:17.158949 containerd[1566]: time="2025-05-27T03:23:17.158295856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,Uid:5a4c33103584c2ac40002dfff79beaa1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e6ad45279f843323b45ebd0c37e24566269fb9ed2e33596a67fb121dd205807\"" May 27 03:23:17.161602 kubelet[2424]: E0527 03:23:17.161045 2424 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-21291" May 27 03:23:17.161602 kubelet[2424]: E0527 03:23:17.161301 2424 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-21291" May 27 03:23:17.164634 containerd[1566]: time="2025-05-27T03:23:17.164586834Z" level=info msg="CreateContainer within sandbox \"9e6ad45279f843323b45ebd0c37e24566269fb9ed2e33596a67fb121dd205807\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:23:17.165554 containerd[1566]: time="2025-05-27T03:23:17.165516131Z" level=info msg="CreateContainer within sandbox \"528b431b2a48728233c4a1baab5524469daa7b6813fd9fbae9f47e654b96acd1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:23:17.177372 containerd[1566]: time="2025-05-27T03:23:17.177115670Z" level=info 
msg="Container bff59fa701a4ec6bc06018a174dd524bc08db7c41eee1777cbc52b3bafc3d4ea: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:17.182319 containerd[1566]: time="2025-05-27T03:23:17.182286022Z" level=info msg="Container a45a8d14adaf8359c3cb17782546e25c2b9427e7d117566aec55f820d3686f5a: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:17.191196 containerd[1566]: time="2025-05-27T03:23:17.191159248Z" level=info msg="CreateContainer within sandbox \"9e6ad45279f843323b45ebd0c37e24566269fb9ed2e33596a67fb121dd205807\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bff59fa701a4ec6bc06018a174dd524bc08db7c41eee1777cbc52b3bafc3d4ea\"" May 27 03:23:17.192177 containerd[1566]: time="2025-05-27T03:23:17.192138407Z" level=info msg="StartContainer for \"bff59fa701a4ec6bc06018a174dd524bc08db7c41eee1777cbc52b3bafc3d4ea\"" May 27 03:23:17.194833 containerd[1566]: time="2025-05-27T03:23:17.194767371Z" level=info msg="connecting to shim bff59fa701a4ec6bc06018a174dd524bc08db7c41eee1777cbc52b3bafc3d4ea" address="unix:///run/containerd/s/a8addf2ec2bd8213bf27e62a31022f34c2f1078a1d95235288877762f0e465ef" protocol=ttrpc version=3 May 27 03:23:17.198930 containerd[1566]: time="2025-05-27T03:23:17.198876793Z" level=info msg="CreateContainer within sandbox \"528b431b2a48728233c4a1baab5524469daa7b6813fd9fbae9f47e654b96acd1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a45a8d14adaf8359c3cb17782546e25c2b9427e7d117566aec55f820d3686f5a\"" May 27 03:23:17.200360 containerd[1566]: time="2025-05-27T03:23:17.200272686Z" level=info msg="StartContainer for \"a45a8d14adaf8359c3cb17782546e25c2b9427e7d117566aec55f820d3686f5a\"" May 27 03:23:17.206767 containerd[1566]: time="2025-05-27T03:23:17.206707005Z" level=info msg="connecting to shim a45a8d14adaf8359c3cb17782546e25c2b9427e7d117566aec55f820d3686f5a" address="unix:///run/containerd/s/3d20269278eeeb22cbbb82268ea8a696aa6856303094c45dabf82d203f73c263" protocol=ttrpc version=3 
May 27 03:23:17.211659 containerd[1566]: time="2025-05-27T03:23:17.211411896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,Uid:9831c815e4e9d18da230dfc0a4aa4502,Namespace:kube-system,Attempt:0,} returns sandbox id \"b236328e0519e95a592c6ed5ff5c778365499d224baf6e5fcfb6cac398932f5a\"" May 27 03:23:17.214221 kubelet[2424]: E0527 03:23:17.214085 2424 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flat" May 27 03:23:17.216221 containerd[1566]: time="2025-05-27T03:23:17.216179191Z" level=info msg="CreateContainer within sandbox \"b236328e0519e95a592c6ed5ff5c778365499d224baf6e5fcfb6cac398932f5a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:23:17.231614 containerd[1566]: time="2025-05-27T03:23:17.231563445Z" level=info msg="Container ae3a1cc90c8ea5afc9e595fcd45eb977c50f0ddd73e7cd27944571534581957a: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:17.233619 systemd[1]: Started cri-containerd-bff59fa701a4ec6bc06018a174dd524bc08db7c41eee1777cbc52b3bafc3d4ea.scope - libcontainer container bff59fa701a4ec6bc06018a174dd524bc08db7c41eee1777cbc52b3bafc3d4ea. 
May 27 03:23:17.250364 containerd[1566]: time="2025-05-27T03:23:17.250279856Z" level=info msg="CreateContainer within sandbox \"b236328e0519e95a592c6ed5ff5c778365499d224baf6e5fcfb6cac398932f5a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ae3a1cc90c8ea5afc9e595fcd45eb977c50f0ddd73e7cd27944571534581957a\"" May 27 03:23:17.251072 containerd[1566]: time="2025-05-27T03:23:17.250991552Z" level=info msg="StartContainer for \"ae3a1cc90c8ea5afc9e595fcd45eb977c50f0ddd73e7cd27944571534581957a\"" May 27 03:23:17.253016 systemd[1]: Started cri-containerd-a45a8d14adaf8359c3cb17782546e25c2b9427e7d117566aec55f820d3686f5a.scope - libcontainer container a45a8d14adaf8359c3cb17782546e25c2b9427e7d117566aec55f820d3686f5a. May 27 03:23:17.253803 containerd[1566]: time="2025-05-27T03:23:17.253729263Z" level=info msg="connecting to shim ae3a1cc90c8ea5afc9e595fcd45eb977c50f0ddd73e7cd27944571534581957a" address="unix:///run/containerd/s/c88af6516de6f25218cc88b380b535f7f08cead105ff3be3809d7e3a8515363c" protocol=ttrpc version=3 May 27 03:23:17.265607 kubelet[2424]: I0527 03:23:17.265085 2424 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:17.265607 kubelet[2424]: E0527 03:23:17.265517 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.39:6443/api/v1/nodes\": dial tcp 10.128.0.39:6443: connect: connection refused" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:17.304662 systemd[1]: Started cri-containerd-ae3a1cc90c8ea5afc9e595fcd45eb977c50f0ddd73e7cd27944571534581957a.scope - libcontainer container ae3a1cc90c8ea5afc9e595fcd45eb977c50f0ddd73e7cd27944571534581957a. 
May 27 03:23:17.398820 containerd[1566]: time="2025-05-27T03:23:17.398692322Z" level=info msg="StartContainer for \"bff59fa701a4ec6bc06018a174dd524bc08db7c41eee1777cbc52b3bafc3d4ea\" returns successfully" May 27 03:23:17.407759 containerd[1566]: time="2025-05-27T03:23:17.407446308Z" level=info msg="StartContainer for \"a45a8d14adaf8359c3cb17782546e25c2b9427e7d117566aec55f820d3686f5a\" returns successfully" May 27 03:23:17.411543 kubelet[2424]: W0527 03:23:17.411462 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.39:6443: connect: connection refused May 27 03:23:17.411543 kubelet[2424]: E0527 03:23:17.411525 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.39:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:17.423995 kubelet[2424]: W0527 03:23:17.423845 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.39:6443: connect: connection refused May 27 03:23:17.424114 kubelet[2424]: E0527 03:23:17.424019 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.39:6443: connect: connection refused" logger="UnhandledError" May 27 03:23:17.454803 containerd[1566]: time="2025-05-27T03:23:17.454640749Z" level=info msg="StartContainer for 
\"ae3a1cc90c8ea5afc9e595fcd45eb977c50f0ddd73e7cd27944571534581957a\" returns successfully" May 27 03:23:17.462827 kubelet[2424]: E0527 03:23:17.462785 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:17.484244 kubelet[2424]: E0527 03:23:17.484185 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:18.071724 kubelet[2424]: I0527 03:23:18.071664 2424 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:18.476807 kubelet[2424]: E0527 03:23:18.476762 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:18.477310 kubelet[2424]: E0527 03:23:18.477277 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:18.478196 kubelet[2424]: E0527 03:23:18.478160 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:19.477937 kubelet[2424]: E0527 03:23:19.477887 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:20.323615 kubelet[2424]: E0527 03:23:20.323562 2424 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:20.386099 kubelet[2424]: I0527 03:23:20.385899 2424 apiserver.go:52] "Watching apiserver" May 27 03:23:20.392700 kubelet[2424]: I0527 03:23:20.392332 2424 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:20.392700 kubelet[2424]: E0527 03:23:20.392390 2424 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\": node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" May 27 03:23:20.405557 kubelet[2424]: I0527 03:23:20.405513 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:20.406250 kubelet[2424]: I0527 03:23:20.406164 2424 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:23:20.500167 kubelet[2424]: E0527 03:23:20.500121 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:20.501222 kubelet[2424]: I0527 03:23:20.500833 2424 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:20.518362 kubelet[2424]: E0527 03:23:20.518193 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:20.518362 kubelet[2424]: I0527 03:23:20.518232 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:20.531107 kubelet[2424]: E0527 03:23:20.531045 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:22.591022 systemd[1]: Reload requested from client PID 2691 ('systemctl') (unit session-9.scope)... May 27 03:23:22.591046 systemd[1]: Reloading... May 27 03:23:22.708398 zram_generator::config[2731]: No configuration found. May 27 03:23:22.863121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:23:23.057154 systemd[1]: Reloading finished in 465 ms. 
May 27 03:23:23.101291 kubelet[2424]: I0527 03:23:23.101193 2424 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:23:23.101803 kubelet[2424]: E0527 03:23:23.101484 2424 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal.1843445150cf2fda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,UID:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,},FirstTimestamp:2025-05-27 03:23:16.388679642 +0000 UTC m=+0.766488172,LastTimestamp:2025-05-27 03:23:16.388679642 +0000 UTC m=+0.766488172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal,}" May 27 03:23:23.102001 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:23.116863 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:23:23.117231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:23.117312 systemd[1]: kubelet.service: Consumed 1.280s CPU time, 129.7M memory peak. May 27 03:23:23.121985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:23.517246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:23:23.529473 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:23:23.600393 kubelet[2783]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:23:23.600393 kubelet[2783]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:23:23.600393 kubelet[2783]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:23:23.600393 kubelet[2783]: I0527 03:23:23.599804 2783 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:23:23.611584 kubelet[2783]: I0527 03:23:23.611430 2783 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:23:23.611584 kubelet[2783]: I0527 03:23:23.611459 2783 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:23:23.613933 kubelet[2783]: I0527 03:23:23.613895 2783 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:23:23.619764 kubelet[2783]: I0527 03:23:23.619544 2783 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 27 03:23:23.622974 kubelet[2783]: I0527 03:23:23.622937 2783 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:23:23.629439 kubelet[2783]: I0527 03:23:23.629417 2783 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:23:23.635362 kubelet[2783]: I0527 03:23:23.634496 2783 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:23:23.635362 kubelet[2783]: I0527 03:23:23.634781 2783 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:23:23.635362 kubelet[2783]: I0527 03:23:23.634828 2783 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QO
SReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:23:23.635362 kubelet[2783]: I0527 03:23:23.635095 2783 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:23:23.635616 kubelet[2783]: I0527 03:23:23.635109 2783 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:23:23.635616 kubelet[2783]: I0527 03:23:23.635170 2783 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:23.635770 kubelet[2783]: I0527 03:23:23.635720 2783 kubelet.go:446] "Attempting to sync node with API server" May 27 03:23:23.635843 kubelet[2783]: I0527 03:23:23.635799 2783 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:23:23.637313 kubelet[2783]: I0527 03:23:23.635934 2783 kubelet.go:352] "Adding apiserver pod source" May 27 03:23:23.637313 kubelet[2783]: I0527 03:23:23.635959 2783 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:23:23.637313 kubelet[2783]: I0527 03:23:23.636872 2783 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:23:23.637526 kubelet[2783]: I0527 03:23:23.637513 2783 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:23:23.638148 kubelet[2783]: I0527 03:23:23.638125 2783 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:23:23.638276 kubelet[2783]: I0527 03:23:23.638171 2783 server.go:1287] "Started kubelet" May 27 03:23:23.643787 kubelet[2783]: I0527 03:23:23.643655 2783 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 
03:23:23.649230 kubelet[2783]: I0527 03:23:23.649178 2783 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:23:23.655365 kubelet[2783]: I0527 03:23:23.655259 2783 server.go:479] "Adding debug handlers to kubelet server" May 27 03:23:23.657750 kubelet[2783]: I0527 03:23:23.656091 2783 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:23:23.666840 kubelet[2783]: I0527 03:23:23.666749 2783 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:23:23.668420 kubelet[2783]: E0527 03:23:23.668395 2783 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" not found" May 27 03:23:23.670381 kubelet[2783]: I0527 03:23:23.669289 2783 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:23:23.670680 kubelet[2783]: I0527 03:23:23.670649 2783 reconciler.go:26] "Reconciler: start to sync state" May 27 03:23:23.676323 kubelet[2783]: I0527 03:23:23.676249 2783 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:23:23.678397 kubelet[2783]: I0527 03:23:23.677540 2783 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:23:23.689760 kubelet[2783]: I0527 03:23:23.689682 2783 factory.go:221] Registration of the systemd container factory successfully May 27 03:23:23.689876 kubelet[2783]: I0527 03:23:23.689802 2783 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:23:23.697594 kubelet[2783]: E0527 03:23:23.697516 2783 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:23:23.702777 kubelet[2783]: I0527 03:23:23.702736 2783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:23:23.704883 kubelet[2783]: I0527 03:23:23.704677 2783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:23:23.704883 kubelet[2783]: I0527 03:23:23.704706 2783 factory.go:221] Registration of the containerd container factory successfully May 27 03:23:23.717102 kubelet[2783]: I0527 03:23:23.704714 2783 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:23:23.717552 kubelet[2783]: I0527 03:23:23.717441 2783 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:23:23.717720 kubelet[2783]: I0527 03:23:23.717703 2783 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:23:23.717887 kubelet[2783]: E0527 03:23:23.717860 2783 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:23:23.796399 kubelet[2783]: I0527 03:23:23.795840 2783 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:23:23.797736 kubelet[2783]: I0527 03:23:23.797702 2783 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:23:23.797853 kubelet[2783]: I0527 03:23:23.797743 2783 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:23.798004 kubelet[2783]: I0527 03:23:23.797977 2783 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:23:23.798069 kubelet[2783]: I0527 03:23:23.798004 2783 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:23:23.798069 kubelet[2783]: I0527 03:23:23.798036 2783 policy_none.go:49] "None policy: Start" May 27 03:23:23.798069 kubelet[2783]: I0527 03:23:23.798052 2783 memory_manager.go:186] "Starting 
memorymanager" policy="None" May 27 03:23:23.798069 kubelet[2783]: I0527 03:23:23.798069 2783 state_mem.go:35] "Initializing new in-memory state store" May 27 03:23:23.798282 kubelet[2783]: I0527 03:23:23.798258 2783 state_mem.go:75] "Updated machine memory state" May 27 03:23:23.805608 kubelet[2783]: I0527 03:23:23.805583 2783 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:23:23.805952 kubelet[2783]: I0527 03:23:23.805933 2783 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:23:23.806081 kubelet[2783]: I0527 03:23:23.806044 2783 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:23:23.807092 kubelet[2783]: I0527 03:23:23.807072 2783 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:23:23.810026 kubelet[2783]: E0527 03:23:23.809998 2783 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 03:23:23.818820 kubelet[2783]: I0527 03:23:23.818775 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.820215 kubelet[2783]: I0527 03:23:23.820145 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.822231 kubelet[2783]: I0527 03:23:23.821282 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.831623 kubelet[2783]: W0527 03:23:23.831560 2783 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] May 27 03:23:23.832114 kubelet[2783]: W0527 03:23:23.831900 2783 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] May 27 03:23:23.838896 kubelet[2783]: W0527 03:23:23.837473 2783 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] May 27 03:23:23.872284 kubelet[2783]: I0527 03:23:23.871917 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66076a3ccc8720f853f364791a6ce3c1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"66076a3ccc8720f853f364791a6ce3c1\") " pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 
03:23:23.872284 kubelet[2783]: I0527 03:23:23.871970 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.872284 kubelet[2783]: I0527 03:23:23.872005 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-kubeconfig\") pod \"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.872284 kubelet[2783]: I0527 03:23:23.872041 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.872602 kubelet[2783]: I0527 03:23:23.872072 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66076a3ccc8720f853f364791a6ce3c1-ca-certs\") pod \"kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"66076a3ccc8720f853f364791a6ce3c1\") " 
pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.872602 kubelet[2783]: I0527 03:23:23.872104 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66076a3ccc8720f853f364791a6ce3c1-k8s-certs\") pod \"kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"66076a3ccc8720f853f364791a6ce3c1\") " pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.872602 kubelet[2783]: I0527 03:23:23.872133 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-ca-certs\") pod \"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.872602 kubelet[2783]: I0527 03:23:23.872162 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9831c815e4e9d18da230dfc0a4aa4502-k8s-certs\") pod \"kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"9831c815e4e9d18da230dfc0a4aa4502\") " pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.872806 kubelet[2783]: I0527 03:23:23.872193 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a4c33103584c2ac40002dfff79beaa1-kubeconfig\") pod \"kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" (UID: \"5a4c33103584c2ac40002dfff79beaa1\") " 
pod="kube-system/kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.924263 kubelet[2783]: I0527 03:23:23.924203 2783 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.934956 kubelet[2783]: I0527 03:23:23.934899 2783 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:23.935127 kubelet[2783]: I0527 03:23:23.935000 2783 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:24.659565 kubelet[2783]: I0527 03:23:24.659473 2783 apiserver.go:52] "Watching apiserver" May 27 03:23:24.671519 kubelet[2783]: I0527 03:23:24.671481 2783 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:23:24.767384 kubelet[2783]: I0527 03:23:24.765922 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:24.781426 kubelet[2783]: W0527 03:23:24.781388 2783 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] May 27 03:23:24.781604 kubelet[2783]: E0527 03:23:24.781571 2783 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:23:24.821642 kubelet[2783]: I0527 03:23:24.821231 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" podStartSLOduration=1.821207197 podStartE2EDuration="1.821207197s" 
podCreationTimestamp="2025-05-27 03:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:24.804549565 +0000 UTC m=+1.268582170" watchObservedRunningTime="2025-05-27 03:23:24.821207197 +0000 UTC m=+1.285239805" May 27 03:23:24.835608 kubelet[2783]: I0527 03:23:24.835538 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" podStartSLOduration=1.835514121 podStartE2EDuration="1.835514121s" podCreationTimestamp="2025-05-27 03:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:24.822706388 +0000 UTC m=+1.286738997" watchObservedRunningTime="2025-05-27 03:23:24.835514121 +0000 UTC m=+1.299546731" May 27 03:23:25.045580 update_engine[1554]: I20250527 03:23:25.045408 1554 update_attempter.cc:509] Updating boot flags... 
May 27 03:23:27.051369 kubelet[2783]: I0527 03:23:27.051276 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" podStartSLOduration=4.050970142 podStartE2EDuration="4.050970142s" podCreationTimestamp="2025-05-27 03:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:24.835826771 +0000 UTC m=+1.299859384" watchObservedRunningTime="2025-05-27 03:23:27.050970142 +0000 UTC m=+3.515002748" May 27 03:23:29.010308 kubelet[2783]: I0527 03:23:29.010125 2783 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:23:29.011938 containerd[1566]: time="2025-05-27T03:23:29.011876930Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 03:23:29.014377 kubelet[2783]: I0527 03:23:29.013131 2783 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:23:29.969369 systemd[1]: Created slice kubepods-besteffort-pod1d6c7ab6_aea6_4aa2_93a3_78cb3bbc69be.slice - libcontainer container kubepods-besteffort-pod1d6c7ab6_aea6_4aa2_93a3_78cb3bbc69be.slice. 
May 27 03:23:30.015154 kubelet[2783]: I0527 03:23:30.015102 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be-xtables-lock\") pod \"kube-proxy-g8mbs\" (UID: \"1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be\") " pod="kube-system/kube-proxy-g8mbs" May 27 03:23:30.015786 kubelet[2783]: I0527 03:23:30.015201 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be-kube-proxy\") pod \"kube-proxy-g8mbs\" (UID: \"1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be\") " pod="kube-system/kube-proxy-g8mbs" May 27 03:23:30.015786 kubelet[2783]: I0527 03:23:30.015231 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be-lib-modules\") pod \"kube-proxy-g8mbs\" (UID: \"1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be\") " pod="kube-system/kube-proxy-g8mbs" May 27 03:23:30.015786 kubelet[2783]: I0527 03:23:30.015255 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srdzb\" (UniqueName: \"kubernetes.io/projected/1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be-kube-api-access-srdzb\") pod \"kube-proxy-g8mbs\" (UID: \"1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be\") " pod="kube-system/kube-proxy-g8mbs" May 27 03:23:30.161978 systemd[1]: Created slice kubepods-besteffort-pode145e5f2_6c5c_40f8_9030_d8e959cc1ccd.slice - libcontainer container kubepods-besteffort-pode145e5f2_6c5c_40f8_9030_d8e959cc1ccd.slice. 
May 27 03:23:30.216629 kubelet[2783]: I0527 03:23:30.216546 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll462\" (UniqueName: \"kubernetes.io/projected/e145e5f2-6c5c-40f8-9030-d8e959cc1ccd-kube-api-access-ll462\") pod \"tigera-operator-844669ff44-mk9rs\" (UID: \"e145e5f2-6c5c-40f8-9030-d8e959cc1ccd\") " pod="tigera-operator/tigera-operator-844669ff44-mk9rs" May 27 03:23:30.216629 kubelet[2783]: I0527 03:23:30.216615 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e145e5f2-6c5c-40f8-9030-d8e959cc1ccd-var-lib-calico\") pod \"tigera-operator-844669ff44-mk9rs\" (UID: \"e145e5f2-6c5c-40f8-9030-d8e959cc1ccd\") " pod="tigera-operator/tigera-operator-844669ff44-mk9rs" May 27 03:23:30.283174 containerd[1566]: time="2025-05-27T03:23:30.282592455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8mbs,Uid:1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be,Namespace:kube-system,Attempt:0,}" May 27 03:23:30.312723 containerd[1566]: time="2025-05-27T03:23:30.312667946Z" level=info msg="connecting to shim 08356be6b80b3694e09da50fc57a9c0ff619f9b33176b170be71c278adb14530" address="unix:///run/containerd/s/66041681e14807e171487ab108e9ed943988a8d5a2c27c40a731f127c3292a70" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:30.366546 systemd[1]: Started cri-containerd-08356be6b80b3694e09da50fc57a9c0ff619f9b33176b170be71c278adb14530.scope - libcontainer container 08356be6b80b3694e09da50fc57a9c0ff619f9b33176b170be71c278adb14530. 
May 27 03:23:30.404847 containerd[1566]: time="2025-05-27T03:23:30.404718552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8mbs,Uid:1d6c7ab6-aea6-4aa2-93a3-78cb3bbc69be,Namespace:kube-system,Attempt:0,} returns sandbox id \"08356be6b80b3694e09da50fc57a9c0ff619f9b33176b170be71c278adb14530\"" May 27 03:23:30.409551 containerd[1566]: time="2025-05-27T03:23:30.409505567Z" level=info msg="CreateContainer within sandbox \"08356be6b80b3694e09da50fc57a9c0ff619f9b33176b170be71c278adb14530\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:23:30.425056 containerd[1566]: time="2025-05-27T03:23:30.425016748Z" level=info msg="Container 350416ef812996edfc9ecd4f3b11a714a404b57cd7f1231c83a116523b985475: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:30.436383 containerd[1566]: time="2025-05-27T03:23:30.436309886Z" level=info msg="CreateContainer within sandbox \"08356be6b80b3694e09da50fc57a9c0ff619f9b33176b170be71c278adb14530\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"350416ef812996edfc9ecd4f3b11a714a404b57cd7f1231c83a116523b985475\"" May 27 03:23:30.437268 containerd[1566]: time="2025-05-27T03:23:30.437164840Z" level=info msg="StartContainer for \"350416ef812996edfc9ecd4f3b11a714a404b57cd7f1231c83a116523b985475\"" May 27 03:23:30.439434 containerd[1566]: time="2025-05-27T03:23:30.439386329Z" level=info msg="connecting to shim 350416ef812996edfc9ecd4f3b11a714a404b57cd7f1231c83a116523b985475" address="unix:///run/containerd/s/66041681e14807e171487ab108e9ed943988a8d5a2c27c40a731f127c3292a70" protocol=ttrpc version=3 May 27 03:23:30.463554 systemd[1]: Started cri-containerd-350416ef812996edfc9ecd4f3b11a714a404b57cd7f1231c83a116523b985475.scope - libcontainer container 350416ef812996edfc9ecd4f3b11a714a404b57cd7f1231c83a116523b985475. 
May 27 03:23:30.466963 containerd[1566]: time="2025-05-27T03:23:30.466774605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-mk9rs,Uid:e145e5f2-6c5c-40f8-9030-d8e959cc1ccd,Namespace:tigera-operator,Attempt:0,}" May 27 03:23:30.499980 containerd[1566]: time="2025-05-27T03:23:30.499670628Z" level=info msg="connecting to shim d137a380dd48d7ecdd58c9c264369af6812c7e1724748d2d50cae7db1f396abc" address="unix:///run/containerd/s/b44c40c901b1903a4cbe1b283a5356eddc31a55b67cc723d4ee92671f77c9bbc" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:30.545834 systemd[1]: Started cri-containerd-d137a380dd48d7ecdd58c9c264369af6812c7e1724748d2d50cae7db1f396abc.scope - libcontainer container d137a380dd48d7ecdd58c9c264369af6812c7e1724748d2d50cae7db1f396abc. May 27 03:23:30.556184 containerd[1566]: time="2025-05-27T03:23:30.556123120Z" level=info msg="StartContainer for \"350416ef812996edfc9ecd4f3b11a714a404b57cd7f1231c83a116523b985475\" returns successfully" May 27 03:23:30.637614 containerd[1566]: time="2025-05-27T03:23:30.637493829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-mk9rs,Uid:e145e5f2-6c5c-40f8-9030-d8e959cc1ccd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d137a380dd48d7ecdd58c9c264369af6812c7e1724748d2d50cae7db1f396abc\"" May 27 03:23:30.641534 containerd[1566]: time="2025-05-27T03:23:30.641494708Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 03:23:31.141031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150282787.mount: Deactivated successfully. May 27 03:23:31.504242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084604929.mount: Deactivated successfully. 
May 27 03:23:32.596567 containerd[1566]: time="2025-05-27T03:23:32.596496164Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:32.598256 containerd[1566]: time="2025-05-27T03:23:32.598181363Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 03:23:32.599682 containerd[1566]: time="2025-05-27T03:23:32.599617063Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:32.602416 containerd[1566]: time="2025-05-27T03:23:32.602353371Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:32.603671 containerd[1566]: time="2025-05-27T03:23:32.603312611Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.961719543s" May 27 03:23:32.603671 containerd[1566]: time="2025-05-27T03:23:32.603372966Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 03:23:32.606397 containerd[1566]: time="2025-05-27T03:23:32.606360406Z" level=info msg="CreateContainer within sandbox \"d137a380dd48d7ecdd58c9c264369af6812c7e1724748d2d50cae7db1f396abc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 03:23:32.618011 containerd[1566]: time="2025-05-27T03:23:32.617970866Z" level=info msg="Container 
485d550233d5ebf219f0ecbe86db431e04ce7050c3a2a7dfb66bbdeb1094a6d8: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:32.626231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941964493.mount: Deactivated successfully. May 27 03:23:32.631301 containerd[1566]: time="2025-05-27T03:23:32.631250314Z" level=info msg="CreateContainer within sandbox \"d137a380dd48d7ecdd58c9c264369af6812c7e1724748d2d50cae7db1f396abc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"485d550233d5ebf219f0ecbe86db431e04ce7050c3a2a7dfb66bbdeb1094a6d8\"" May 27 03:23:32.632164 containerd[1566]: time="2025-05-27T03:23:32.632131570Z" level=info msg="StartContainer for \"485d550233d5ebf219f0ecbe86db431e04ce7050c3a2a7dfb66bbdeb1094a6d8\"" May 27 03:23:32.634237 containerd[1566]: time="2025-05-27T03:23:32.634165509Z" level=info msg="connecting to shim 485d550233d5ebf219f0ecbe86db431e04ce7050c3a2a7dfb66bbdeb1094a6d8" address="unix:///run/containerd/s/b44c40c901b1903a4cbe1b283a5356eddc31a55b67cc723d4ee92671f77c9bbc" protocol=ttrpc version=3 May 27 03:23:32.669582 systemd[1]: Started cri-containerd-485d550233d5ebf219f0ecbe86db431e04ce7050c3a2a7dfb66bbdeb1094a6d8.scope - libcontainer container 485d550233d5ebf219f0ecbe86db431e04ce7050c3a2a7dfb66bbdeb1094a6d8. 
May 27 03:23:32.709984 containerd[1566]: time="2025-05-27T03:23:32.709884270Z" level=info msg="StartContainer for \"485d550233d5ebf219f0ecbe86db431e04ce7050c3a2a7dfb66bbdeb1094a6d8\" returns successfully" May 27 03:23:32.802326 kubelet[2783]: I0527 03:23:32.801865 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g8mbs" podStartSLOduration=3.801840251 podStartE2EDuration="3.801840251s" podCreationTimestamp="2025-05-27 03:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:30.796497165 +0000 UTC m=+7.260529775" watchObservedRunningTime="2025-05-27 03:23:32.801840251 +0000 UTC m=+9.265872857" May 27 03:23:33.740741 kubelet[2783]: I0527 03:23:33.740572 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-mk9rs" podStartSLOduration=1.775878263 podStartE2EDuration="3.740514192s" podCreationTimestamp="2025-05-27 03:23:30 +0000 UTC" firstStartedPulling="2025-05-27 03:23:30.640037749 +0000 UTC m=+7.104070339" lastFinishedPulling="2025-05-27 03:23:32.604673668 +0000 UTC m=+9.068706268" observedRunningTime="2025-05-27 03:23:32.802765328 +0000 UTC m=+9.266797937" watchObservedRunningTime="2025-05-27 03:23:33.740514192 +0000 UTC m=+10.204546802" May 27 03:23:40.221147 sudo[1885]: pam_unix(sudo:session): session closed for user root May 27 03:23:40.265109 sshd[1884]: Connection closed by 139.178.68.195 port 34400 May 27 03:23:40.266699 sshd-session[1882]: pam_unix(sshd:session): session closed for user core May 27 03:23:40.280178 systemd[1]: sshd@8-10.128.0.39:22-139.178.68.195:34400.service: Deactivated successfully. May 27 03:23:40.286522 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:23:40.286858 systemd[1]: session-9.scope: Consumed 7.175s CPU time, 229.2M memory peak. May 27 03:23:40.291843 systemd-logind[1551]: Session 9 logged out. 
Waiting for processes to exit. May 27 03:23:40.296513 systemd-logind[1551]: Removed session 9. May 27 03:23:45.957679 systemd[1]: Created slice kubepods-besteffort-poda91036ee_ebb1_4fb3_ab1c_8abc7aba3c56.slice - libcontainer container kubepods-besteffort-poda91036ee_ebb1_4fb3_ab1c_8abc7aba3c56.slice. May 27 03:23:46.017098 kubelet[2783]: I0527 03:23:46.017025 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a91036ee-ebb1-4fb3-ab1c-8abc7aba3c56-tigera-ca-bundle\") pod \"calico-typha-788f9bcf5b-ztvtt\" (UID: \"a91036ee-ebb1-4fb3-ab1c-8abc7aba3c56\") " pod="calico-system/calico-typha-788f9bcf5b-ztvtt" May 27 03:23:46.017652 kubelet[2783]: I0527 03:23:46.017135 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a91036ee-ebb1-4fb3-ab1c-8abc7aba3c56-typha-certs\") pod \"calico-typha-788f9bcf5b-ztvtt\" (UID: \"a91036ee-ebb1-4fb3-ab1c-8abc7aba3c56\") " pod="calico-system/calico-typha-788f9bcf5b-ztvtt" May 27 03:23:46.017652 kubelet[2783]: I0527 03:23:46.017214 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l6nd\" (UniqueName: \"kubernetes.io/projected/a91036ee-ebb1-4fb3-ab1c-8abc7aba3c56-kube-api-access-8l6nd\") pod \"calico-typha-788f9bcf5b-ztvtt\" (UID: \"a91036ee-ebb1-4fb3-ab1c-8abc7aba3c56\") " pod="calico-system/calico-typha-788f9bcf5b-ztvtt" May 27 03:23:46.269537 containerd[1566]: time="2025-05-27T03:23:46.268771672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-788f9bcf5b-ztvtt,Uid:a91036ee-ebb1-4fb3-ab1c-8abc7aba3c56,Namespace:calico-system,Attempt:0,}" May 27 03:23:46.320007 kubelet[2783]: I0527 03:23:46.319926 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7942a646-4de0-4447-8d88-7ad37451ec5d-lib-modules\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.320007 kubelet[2783]: I0527 03:23:46.319985 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7942a646-4de0-4447-8d88-7ad37451ec5d-policysync\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.320007 kubelet[2783]: I0527 03:23:46.320014 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7942a646-4de0-4447-8d88-7ad37451ec5d-var-lib-calico\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.321879 kubelet[2783]: I0527 03:23:46.320044 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7942a646-4de0-4447-8d88-7ad37451ec5d-cni-bin-dir\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.321879 kubelet[2783]: I0527 03:23:46.320073 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7942a646-4de0-4447-8d88-7ad37451ec5d-tigera-ca-bundle\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.321879 kubelet[2783]: I0527 03:23:46.320130 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7942a646-4de0-4447-8d88-7ad37451ec5d-cni-log-dir\") 
pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.321879 kubelet[2783]: I0527 03:23:46.320195 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7942a646-4de0-4447-8d88-7ad37451ec5d-var-run-calico\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.321879 kubelet[2783]: I0527 03:23:46.321406 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zml6\" (UniqueName: \"kubernetes.io/projected/7942a646-4de0-4447-8d88-7ad37451ec5d-kube-api-access-9zml6\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.322136 kubelet[2783]: I0527 03:23:46.321491 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7942a646-4de0-4447-8d88-7ad37451ec5d-cni-net-dir\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.322136 kubelet[2783]: I0527 03:23:46.321570 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7942a646-4de0-4447-8d88-7ad37451ec5d-flexvol-driver-host\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.322136 kubelet[2783]: I0527 03:23:46.321602 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7942a646-4de0-4447-8d88-7ad37451ec5d-node-certs\") pod \"calico-node-rkqsc\" (UID: 
\"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.327309 kubelet[2783]: I0527 03:23:46.326281 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7942a646-4de0-4447-8d88-7ad37451ec5d-xtables-lock\") pod \"calico-node-rkqsc\" (UID: \"7942a646-4de0-4447-8d88-7ad37451ec5d\") " pod="calico-system/calico-node-rkqsc" May 27 03:23:46.327018 systemd[1]: Created slice kubepods-besteffort-pod7942a646_4de0_4447_8d88_7ad37451ec5d.slice - libcontainer container kubepods-besteffort-pod7942a646_4de0_4447_8d88_7ad37451ec5d.slice. May 27 03:23:46.343527 containerd[1566]: time="2025-05-27T03:23:46.343470139Z" level=info msg="connecting to shim 4c19a88d3f1b4be168d98a957024836e3220fc4a3c1dee84dbe4e46d6c624bae" address="unix:///run/containerd/s/57aa96137733c65b0f4a3fb8686b67d7bdc7b9a63d9420c42fe0403120337328" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:46.403958 systemd[1]: Started cri-containerd-4c19a88d3f1b4be168d98a957024836e3220fc4a3c1dee84dbe4e46d6c624bae.scope - libcontainer container 4c19a88d3f1b4be168d98a957024836e3220fc4a3c1dee84dbe4e46d6c624bae. May 27 03:23:46.450629 kubelet[2783]: E0527 03:23:46.450584 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.450629 kubelet[2783]: W0527 03:23:46.450617 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.450884 kubelet[2783]: E0527 03:23:46.450659 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.477729 kubelet[2783]: E0527 03:23:46.477694 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.477729 kubelet[2783]: W0527 03:23:46.477722 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.477969 kubelet[2783]: E0527 03:23:46.477748 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.530726 containerd[1566]: time="2025-05-27T03:23:46.530590902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-788f9bcf5b-ztvtt,Uid:a91036ee-ebb1-4fb3-ab1c-8abc7aba3c56,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c19a88d3f1b4be168d98a957024836e3220fc4a3c1dee84dbe4e46d6c624bae\"" May 27 03:23:46.535588 containerd[1566]: time="2025-05-27T03:23:46.535469722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:23:46.642450 containerd[1566]: time="2025-05-27T03:23:46.642402956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rkqsc,Uid:7942a646-4de0-4447-8d88-7ad37451ec5d,Namespace:calico-system,Attempt:0,}" May 27 03:23:46.652662 kubelet[2783]: E0527 03:23:46.650559 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5t8w" podUID="dca63bae-90e6-4118-afb8-b162c7c3ea6d" May 27 03:23:46.691086 containerd[1566]: time="2025-05-27T03:23:46.690999824Z" level=info msg="connecting to shim 4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc" 
address="unix:///run/containerd/s/5a16fc6d7d6675b04c6647da7cb222edc8b196a98e459ac560ac4509fb7ed792" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:46.721566 kubelet[2783]: E0527 03:23:46.721309 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.721566 kubelet[2783]: W0527 03:23:46.721417 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.721566 kubelet[2783]: E0527 03:23:46.721455 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.723039 kubelet[2783]: E0527 03:23:46.721864 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.723039 kubelet[2783]: W0527 03:23:46.721883 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.723039 kubelet[2783]: E0527 03:23:46.721906 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.723039 kubelet[2783]: E0527 03:23:46.722389 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.723039 kubelet[2783]: W0527 03:23:46.722412 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.723039 kubelet[2783]: E0527 03:23:46.722461 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.724801 kubelet[2783]: E0527 03:23:46.724401 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.724801 kubelet[2783]: W0527 03:23:46.724423 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.724801 kubelet[2783]: E0527 03:23:46.724485 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.725360 kubelet[2783]: E0527 03:23:46.725127 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.725360 kubelet[2783]: W0527 03:23:46.725151 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.725360 kubelet[2783]: E0527 03:23:46.725169 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.726164 kubelet[2783]: E0527 03:23:46.725605 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.726164 kubelet[2783]: W0527 03:23:46.725624 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.726164 kubelet[2783]: E0527 03:23:46.725654 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.726164 kubelet[2783]: E0527 03:23:46.726016 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.726164 kubelet[2783]: W0527 03:23:46.726031 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.726164 kubelet[2783]: E0527 03:23:46.726048 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.726555 kubelet[2783]: E0527 03:23:46.726456 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.726555 kubelet[2783]: W0527 03:23:46.726471 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.726555 kubelet[2783]: E0527 03:23:46.726488 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.727458 kubelet[2783]: E0527 03:23:46.726912 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.727458 kubelet[2783]: W0527 03:23:46.726926 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.727458 kubelet[2783]: E0527 03:23:46.726942 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.728507 kubelet[2783]: E0527 03:23:46.727645 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.728507 kubelet[2783]: W0527 03:23:46.727661 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.728507 kubelet[2783]: E0527 03:23:46.727681 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.728507 kubelet[2783]: E0527 03:23:46.728302 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.728507 kubelet[2783]: W0527 03:23:46.728330 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.728507 kubelet[2783]: E0527 03:23:46.728376 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.730407 kubelet[2783]: E0527 03:23:46.729049 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.730407 kubelet[2783]: W0527 03:23:46.729088 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.730407 kubelet[2783]: E0527 03:23:46.729163 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.730407 kubelet[2783]: E0527 03:23:46.729845 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.730407 kubelet[2783]: W0527 03:23:46.729860 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.730407 kubelet[2783]: E0527 03:23:46.729876 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.730407 kubelet[2783]: E0527 03:23:46.730386 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.730407 kubelet[2783]: W0527 03:23:46.730401 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.731092 kubelet[2783]: E0527 03:23:46.730418 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.731092 kubelet[2783]: E0527 03:23:46.730969 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.731092 kubelet[2783]: W0527 03:23:46.730984 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.731092 kubelet[2783]: E0527 03:23:46.731001 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.732024 kubelet[2783]: E0527 03:23:46.731996 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.732024 kubelet[2783]: W0527 03:23:46.732021 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.732153 kubelet[2783]: E0527 03:23:46.732040 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.733390 kubelet[2783]: E0527 03:23:46.732682 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.733390 kubelet[2783]: W0527 03:23:46.732706 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.733390 kubelet[2783]: E0527 03:23:46.732724 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.733590 kubelet[2783]: E0527 03:23:46.733562 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.733590 kubelet[2783]: W0527 03:23:46.733578 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.733700 kubelet[2783]: E0527 03:23:46.733595 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.734014 kubelet[2783]: E0527 03:23:46.733975 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.734014 kubelet[2783]: W0527 03:23:46.733994 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.734014 kubelet[2783]: E0527 03:23:46.734011 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.734727 kubelet[2783]: E0527 03:23:46.734693 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.734727 kubelet[2783]: W0527 03:23:46.734712 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.734727 kubelet[2783]: E0527 03:23:46.734728 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.735440 kubelet[2783]: E0527 03:23:46.735413 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.735440 kubelet[2783]: W0527 03:23:46.735435 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.735566 kubelet[2783]: E0527 03:23:46.735455 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.735566 kubelet[2783]: I0527 03:23:46.735520 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dca63bae-90e6-4118-afb8-b162c7c3ea6d-socket-dir\") pod \"csi-node-driver-r5t8w\" (UID: \"dca63bae-90e6-4118-afb8-b162c7c3ea6d\") " pod="calico-system/csi-node-driver-r5t8w" May 27 03:23:46.736156 kubelet[2783]: E0527 03:23:46.735887 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.736156 kubelet[2783]: W0527 03:23:46.735904 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.736156 kubelet[2783]: E0527 03:23:46.736071 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.737371 kubelet[2783]: I0527 03:23:46.736105 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dca63bae-90e6-4118-afb8-b162c7c3ea6d-registration-dir\") pod \"csi-node-driver-r5t8w\" (UID: \"dca63bae-90e6-4118-afb8-b162c7c3ea6d\") " pod="calico-system/csi-node-driver-r5t8w" May 27 03:23:46.737371 kubelet[2783]: E0527 03:23:46.736903 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.737371 kubelet[2783]: W0527 03:23:46.736916 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.737371 kubelet[2783]: E0527 03:23:46.737111 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.739383 kubelet[2783]: E0527 03:23:46.738421 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.739383 kubelet[2783]: W0527 03:23:46.738442 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.739383 kubelet[2783]: E0527 03:23:46.738514 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.740828 kubelet[2783]: E0527 03:23:46.740116 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.740828 kubelet[2783]: W0527 03:23:46.740141 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.741283 kubelet[2783]: E0527 03:23:46.741146 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.741283 kubelet[2783]: W0527 03:23:46.741167 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.741283 kubelet[2783]: E0527 03:23:46.741186 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.741836 kubelet[2783]: E0527 03:23:46.741798 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.742414 kubelet[2783]: I0527 03:23:46.741952 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzxvf\" (UniqueName: \"kubernetes.io/projected/dca63bae-90e6-4118-afb8-b162c7c3ea6d-kube-api-access-jzxvf\") pod \"csi-node-driver-r5t8w\" (UID: \"dca63bae-90e6-4118-afb8-b162c7c3ea6d\") " pod="calico-system/csi-node-driver-r5t8w" May 27 03:23:46.743369 kubelet[2783]: E0527 03:23:46.742519 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.743369 kubelet[2783]: W0527 03:23:46.742835 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.743369 kubelet[2783]: E0527 03:23:46.742854 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.745375 kubelet[2783]: E0527 03:23:46.744281 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.745375 kubelet[2783]: W0527 03:23:46.744302 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.745375 kubelet[2783]: E0527 03:23:46.744623 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.745904 kubelet[2783]: E0527 03:23:46.745877 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.745904 kubelet[2783]: W0527 03:23:46.745904 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.747369 kubelet[2783]: E0527 03:23:46.746542 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.748819 kubelet[2783]: E0527 03:23:46.748791 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.748819 kubelet[2783]: W0527 03:23:46.748816 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.748972 kubelet[2783]: E0527 03:23:46.748833 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.748972 kubelet[2783]: I0527 03:23:46.748868 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dca63bae-90e6-4118-afb8-b162c7c3ea6d-varrun\") pod \"csi-node-driver-r5t8w\" (UID: \"dca63bae-90e6-4118-afb8-b162c7c3ea6d\") " pod="calico-system/csi-node-driver-r5t8w" May 27 03:23:46.752048 kubelet[2783]: E0527 03:23:46.751943 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.752048 kubelet[2783]: W0527 03:23:46.751971 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.752048 kubelet[2783]: E0527 03:23:46.751991 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.752048 kubelet[2783]: I0527 03:23:46.752018 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dca63bae-90e6-4118-afb8-b162c7c3ea6d-kubelet-dir\") pod \"csi-node-driver-r5t8w\" (UID: \"dca63bae-90e6-4118-afb8-b162c7c3ea6d\") " pod="calico-system/csi-node-driver-r5t8w" May 27 03:23:46.753385 kubelet[2783]: E0527 03:23:46.753115 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.753385 kubelet[2783]: W0527 03:23:46.753141 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.753385 kubelet[2783]: E0527 03:23:46.753160 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.753979 kubelet[2783]: E0527 03:23:46.753737 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.753979 kubelet[2783]: W0527 03:23:46.753752 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.753979 kubelet[2783]: E0527 03:23:46.753772 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.755276 kubelet[2783]: E0527 03:23:46.754527 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.755276 kubelet[2783]: W0527 03:23:46.754548 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.755276 kubelet[2783]: E0527 03:23:46.754566 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.756280 kubelet[2783]: E0527 03:23:46.756209 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.756280 kubelet[2783]: W0527 03:23:46.756233 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.756686 kubelet[2783]: E0527 03:23:46.756249 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.758608 systemd[1]: Started cri-containerd-4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc.scope - libcontainer container 4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc. 
May 27 03:23:46.854947 kubelet[2783]: E0527 03:23:46.854080 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.854947 kubelet[2783]: W0527 03:23:46.854111 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.854947 kubelet[2783]: E0527 03:23:46.854140 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.859165 kubelet[2783]: E0527 03:23:46.855665 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.859165 kubelet[2783]: W0527 03:23:46.855715 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.859165 kubelet[2783]: E0527 03:23:46.855749 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.859165 kubelet[2783]: E0527 03:23:46.856325 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.859165 kubelet[2783]: W0527 03:23:46.856410 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.859165 kubelet[2783]: E0527 03:23:46.856446 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.862576 kubelet[2783]: E0527 03:23:46.862353 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.862576 kubelet[2783]: W0527 03:23:46.862378 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.862576 kubelet[2783]: E0527 03:23:46.862403 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.862812 kubelet[2783]: E0527 03:23:46.862797 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.862879 kubelet[2783]: W0527 03:23:46.862812 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.862943 kubelet[2783]: E0527 03:23:46.862925 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.865545 kubelet[2783]: E0527 03:23:46.863465 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.865545 kubelet[2783]: W0527 03:23:46.863484 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.865545 kubelet[2783]: E0527 03:23:46.863793 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.865545 kubelet[2783]: E0527 03:23:46.864301 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.865545 kubelet[2783]: W0527 03:23:46.864318 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.865545 kubelet[2783]: E0527 03:23:46.864467 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.865545 kubelet[2783]: E0527 03:23:46.864882 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.865545 kubelet[2783]: W0527 03:23:46.864899 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.865545 kubelet[2783]: E0527 03:23:46.865094 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.865545 kubelet[2783]: E0527 03:23:46.865525 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.866029 kubelet[2783]: W0527 03:23:46.865540 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.866029 kubelet[2783]: E0527 03:23:46.865725 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.868309 kubelet[2783]: E0527 03:23:46.867115 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.868309 kubelet[2783]: W0527 03:23:46.867138 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.868309 kubelet[2783]: E0527 03:23:46.867456 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.868309 kubelet[2783]: E0527 03:23:46.868017 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.868309 kubelet[2783]: W0527 03:23:46.868034 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.868309 kubelet[2783]: E0527 03:23:46.868171 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.871360 kubelet[2783]: E0527 03:23:46.870005 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.871360 kubelet[2783]: W0527 03:23:46.870026 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.871360 kubelet[2783]: E0527 03:23:46.870603 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.871608 kubelet[2783]: E0527 03:23:46.871441 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.871608 kubelet[2783]: W0527 03:23:46.871458 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.872876 kubelet[2783]: E0527 03:23:46.871943 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.872876 kubelet[2783]: E0527 03:23:46.872610 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.872876 kubelet[2783]: W0527 03:23:46.872627 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.873469 kubelet[2783]: E0527 03:23:46.873434 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.873584 kubelet[2783]: E0527 03:23:46.873563 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.873644 kubelet[2783]: W0527 03:23:46.873585 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.874329 kubelet[2783]: E0527 03:23:46.874127 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.874329 kubelet[2783]: W0527 03:23:46.874149 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.874806 kubelet[2783]: E0527 03:23:46.874785 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.874806 kubelet[2783]: W0527 03:23:46.874807 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" May 27 03:23:46.875471 kubelet[2783]: E0527 03:23:46.875441 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.875471 kubelet[2783]: W0527 03:23:46.875466 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.876590 kubelet[2783]: E0527 03:23:46.876567 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.876590 kubelet[2783]: W0527 03:23:46.876590 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.876728 kubelet[2783]: E0527 03:23:46.876607 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.876913 kubelet[2783]: E0527 03:23:46.876890 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.876913 kubelet[2783]: W0527 03:23:46.876912 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.877035 kubelet[2783]: E0527 03:23:46.876929 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.878368 kubelet[2783]: E0527 03:23:46.877760 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.878368 kubelet[2783]: W0527 03:23:46.877780 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.878368 kubelet[2783]: E0527 03:23:46.877798 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.878368 kubelet[2783]: E0527 03:23:46.877833 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.878368 kubelet[2783]: E0527 03:23:46.878086 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.878368 kubelet[2783]: W0527 03:23:46.878099 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.878368 kubelet[2783]: E0527 03:23:46.878115 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.878815 kubelet[2783]: E0527 03:23:46.878769 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.879994 kubelet[2783]: W0527 03:23:46.879618 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.879994 kubelet[2783]: E0527 03:23:46.879654 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.879994 kubelet[2783]: E0527 03:23:46.879684 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.881709 kubelet[2783]: E0527 03:23:46.881676 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.881709 kubelet[2783]: W0527 03:23:46.881701 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.881872 kubelet[2783]: E0527 03:23:46.881718 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.881872 kubelet[2783]: E0527 03:23:46.881750 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.881872 kubelet[2783]: E0527 03:23:46.881766 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.882496 kubelet[2783]: E0527 03:23:46.882460 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.882950 kubelet[2783]: W0527 03:23:46.882480 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.882950 kubelet[2783]: E0527 03:23:46.882855 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.890257 containerd[1566]: time="2025-05-27T03:23:46.890202289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rkqsc,Uid:7942a646-4de0-4447-8d88-7ad37451ec5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc\"" May 27 03:23:46.905631 kubelet[2783]: E0527 03:23:46.905600 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.905631 kubelet[2783]: W0527 03:23:46.905627 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.905818 kubelet[2783]: E0527 03:23:46.905653 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:47.553940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805451002.mount: Deactivated successfully. May 27 03:23:48.647801 containerd[1566]: time="2025-05-27T03:23:48.647727934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:48.649107 containerd[1566]: time="2025-05-27T03:23:48.648907384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 03:23:48.650044 containerd[1566]: time="2025-05-27T03:23:48.650000299Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:48.653095 containerd[1566]: time="2025-05-27T03:23:48.653059912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:48.654150 containerd[1566]: time="2025-05-27T03:23:48.654012320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.118475591s" May 27 03:23:48.654150 containerd[1566]: time="2025-05-27T03:23:48.654052505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:23:48.656061 containerd[1566]: time="2025-05-27T03:23:48.656001157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:23:48.679486 
containerd[1566]: time="2025-05-27T03:23:48.679438281Z" level=info msg="CreateContainer within sandbox \"4c19a88d3f1b4be168d98a957024836e3220fc4a3c1dee84dbe4e46d6c624bae\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:23:48.693812 containerd[1566]: time="2025-05-27T03:23:48.693496522Z" level=info msg="Container f5b57c339226ef8bd846b847f14e0c16275f9d19f2b1aede93af60207eeac3af: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:48.716976 containerd[1566]: time="2025-05-27T03:23:48.716919683Z" level=info msg="CreateContainer within sandbox \"4c19a88d3f1b4be168d98a957024836e3220fc4a3c1dee84dbe4e46d6c624bae\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f5b57c339226ef8bd846b847f14e0c16275f9d19f2b1aede93af60207eeac3af\"" May 27 03:23:48.719727 kubelet[2783]: E0527 03:23:48.719656 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5t8w" podUID="dca63bae-90e6-4118-afb8-b162c7c3ea6d" May 27 03:23:48.721674 containerd[1566]: time="2025-05-27T03:23:48.721631535Z" level=info msg="StartContainer for \"f5b57c339226ef8bd846b847f14e0c16275f9d19f2b1aede93af60207eeac3af\"" May 27 03:23:48.724114 containerd[1566]: time="2025-05-27T03:23:48.724068473Z" level=info msg="connecting to shim f5b57c339226ef8bd846b847f14e0c16275f9d19f2b1aede93af60207eeac3af" address="unix:///run/containerd/s/57aa96137733c65b0f4a3fb8686b67d7bdc7b9a63d9420c42fe0403120337328" protocol=ttrpc version=3 May 27 03:23:48.771575 systemd[1]: Started cri-containerd-f5b57c339226ef8bd846b847f14e0c16275f9d19f2b1aede93af60207eeac3af.scope - libcontainer container f5b57c339226ef8bd846b847f14e0c16275f9d19f2b1aede93af60207eeac3af. 
May 27 03:23:48.852982 containerd[1566]: time="2025-05-27T03:23:48.852897354Z" level=info msg="StartContainer for \"f5b57c339226ef8bd846b847f14e0c16275f9d19f2b1aede93af60207eeac3af\" returns successfully" May 27 03:23:48.960200 kubelet[2783]: E0527 03:23:48.958306 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.960200 kubelet[2783]: W0527 03:23:48.958401 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.960200 kubelet[2783]: E0527 03:23:48.958436 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.960200 kubelet[2783]: E0527 03:23:48.958843 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.960200 kubelet[2783]: W0527 03:23:48.958858 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.960200 kubelet[2783]: E0527 03:23:48.958876 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.960200 kubelet[2783]: E0527 03:23:48.959197 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.960200 kubelet[2783]: W0527 03:23:48.959214 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.960200 kubelet[2783]: E0527 03:23:48.959231 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.960200 kubelet[2783]: E0527 03:23:48.959631 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.960964 kubelet[2783]: W0527 03:23:48.959651 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.960964 kubelet[2783]: E0527 03:23:48.959668 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.960964 kubelet[2783]: E0527 03:23:48.960003 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.960964 kubelet[2783]: W0527 03:23:48.960019 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.960964 kubelet[2783]: E0527 03:23:48.960037 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.960964 kubelet[2783]: E0527 03:23:48.960724 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.960964 kubelet[2783]: W0527 03:23:48.960739 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.961312 kubelet[2783]: E0527 03:23:48.960758 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.963820 kubelet[2783]: E0527 03:23:48.961720 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.963820 kubelet[2783]: W0527 03:23:48.961744 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.963820 kubelet[2783]: E0527 03:23:48.961763 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.963820 kubelet[2783]: E0527 03:23:48.962532 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.963820 kubelet[2783]: W0527 03:23:48.962548 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.963820 kubelet[2783]: E0527 03:23:48.962565 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.963820 kubelet[2783]: E0527 03:23:48.962854 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.963820 kubelet[2783]: W0527 03:23:48.962867 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.963820 kubelet[2783]: E0527 03:23:48.962882 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.963820 kubelet[2783]: E0527 03:23:48.963138 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.964968 kubelet[2783]: W0527 03:23:48.963151 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.964968 kubelet[2783]: E0527 03:23:48.963166 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.964968 kubelet[2783]: E0527 03:23:48.963438 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.964968 kubelet[2783]: W0527 03:23:48.963452 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.964968 kubelet[2783]: E0527 03:23:48.963468 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.964968 kubelet[2783]: E0527 03:23:48.963715 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.964968 kubelet[2783]: W0527 03:23:48.963734 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.964968 kubelet[2783]: E0527 03:23:48.963749 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.964968 kubelet[2783]: E0527 03:23:48.963999 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.964968 kubelet[2783]: W0527 03:23:48.964011 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.966009 kubelet[2783]: E0527 03:23:48.964026 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.966009 kubelet[2783]: E0527 03:23:48.964272 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.966009 kubelet[2783]: W0527 03:23:48.964285 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.966009 kubelet[2783]: E0527 03:23:48.964299 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.966009 kubelet[2783]: E0527 03:23:48.964573 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.966009 kubelet[2783]: W0527 03:23:48.964589 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.966009 kubelet[2783]: E0527 03:23:48.964607 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.984960 kubelet[2783]: E0527 03:23:48.984928 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.985365 kubelet[2783]: W0527 03:23:48.985158 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.985365 kubelet[2783]: E0527 03:23:48.985201 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.985973 kubelet[2783]: E0527 03:23:48.985929 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.985973 kubelet[2783]: W0527 03:23:48.985949 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.986270 kubelet[2783]: E0527 03:23:48.986166 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.986712 kubelet[2783]: E0527 03:23:48.986670 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.986712 kubelet[2783]: W0527 03:23:48.986689 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.986991 kubelet[2783]: E0527 03:23:48.986892 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.988037 kubelet[2783]: E0527 03:23:48.987836 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.988037 kubelet[2783]: W0527 03:23:48.987856 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.988037 kubelet[2783]: E0527 03:23:48.987875 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.988561 kubelet[2783]: E0527 03:23:48.988536 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.988561 kubelet[2783]: W0527 03:23:48.988560 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.988698 kubelet[2783]: E0527 03:23:48.988595 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.989454 kubelet[2783]: E0527 03:23:48.989427 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.989454 kubelet[2783]: W0527 03:23:48.989453 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.989824 kubelet[2783]: E0527 03:23:48.989489 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.991244 kubelet[2783]: E0527 03:23:48.990232 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.991244 kubelet[2783]: W0527 03:23:48.990248 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.991244 kubelet[2783]: E0527 03:23:48.990358 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.991481 kubelet[2783]: E0527 03:23:48.991456 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.991481 kubelet[2783]: W0527 03:23:48.991479 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.991611 kubelet[2783]: E0527 03:23:48.991564 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.991832 kubelet[2783]: E0527 03:23:48.991807 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.991832 kubelet[2783]: W0527 03:23:48.991830 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.991976 kubelet[2783]: E0527 03:23:48.991924 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.992218 kubelet[2783]: E0527 03:23:48.992196 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.992218 kubelet[2783]: W0527 03:23:48.992217 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.992360 kubelet[2783]: E0527 03:23:48.992318 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.993810 kubelet[2783]: E0527 03:23:48.993779 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.993810 kubelet[2783]: W0527 03:23:48.993799 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.993959 kubelet[2783]: E0527 03:23:48.993822 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.994189 kubelet[2783]: E0527 03:23:48.994167 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.994256 kubelet[2783]: W0527 03:23:48.994191 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.994313 kubelet[2783]: E0527 03:23:48.994284 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.994836 kubelet[2783]: E0527 03:23:48.994808 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.994836 kubelet[2783]: W0527 03:23:48.994833 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.994967 kubelet[2783]: E0527 03:23:48.994933 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.995230 kubelet[2783]: E0527 03:23:48.995207 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.995230 kubelet[2783]: W0527 03:23:48.995232 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.995667 kubelet[2783]: E0527 03:23:48.995447 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.995839 kubelet[2783]: E0527 03:23:48.995813 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.995839 kubelet[2783]: W0527 03:23:48.995838 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.995972 kubelet[2783]: E0527 03:23:48.995861 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.996472 kubelet[2783]: E0527 03:23:48.996447 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.996472 kubelet[2783]: W0527 03:23:48.996469 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.996624 kubelet[2783]: E0527 03:23:48.996486 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:48.996906 kubelet[2783]: E0527 03:23:48.996880 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.996906 kubelet[2783]: W0527 03:23:48.996903 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.997023 kubelet[2783]: E0527 03:23:48.996921 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:48.999361 kubelet[2783]: E0527 03:23:48.998504 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:48.999361 kubelet[2783]: W0527 03:23:48.998525 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:48.999361 kubelet[2783]: E0527 03:23:48.998543 2783 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:49.585715 containerd[1566]: time="2025-05-27T03:23:49.585646782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:49.586685 containerd[1566]: time="2025-05-27T03:23:49.586624819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 03:23:49.587951 containerd[1566]: time="2025-05-27T03:23:49.587884027Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:49.590690 containerd[1566]: time="2025-05-27T03:23:49.590628185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:49.591909 containerd[1566]: time="2025-05-27T03:23:49.591433045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 935.100325ms" May 27 03:23:49.591909 containerd[1566]: time="2025-05-27T03:23:49.591478785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:23:49.595544 containerd[1566]: time="2025-05-27T03:23:49.595486922Z" level=info msg="CreateContainer within sandbox \"4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:23:49.606401 containerd[1566]: time="2025-05-27T03:23:49.606367102Z" level=info msg="Container 291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:49.616741 containerd[1566]: time="2025-05-27T03:23:49.616683454Z" level=info msg="CreateContainer within sandbox \"4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454\"" May 27 03:23:49.617470 containerd[1566]: time="2025-05-27T03:23:49.617395282Z" level=info msg="StartContainer for \"291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454\"" May 27 03:23:49.623273 containerd[1566]: time="2025-05-27T03:23:49.623188048Z" level=info msg="connecting to shim 291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454" address="unix:///run/containerd/s/5a16fc6d7d6675b04c6647da7cb222edc8b196a98e459ac560ac4509fb7ed792" protocol=ttrpc version=3 May 27 03:23:49.657556 systemd[1]: Started cri-containerd-291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454.scope - libcontainer container 
291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454. May 27 03:23:49.728138 containerd[1566]: time="2025-05-27T03:23:49.728081401Z" level=info msg="StartContainer for \"291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454\" returns successfully" May 27 03:23:49.736640 systemd[1]: cri-containerd-291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454.scope: Deactivated successfully. May 27 03:23:49.742525 containerd[1566]: time="2025-05-27T03:23:49.742389065Z" level=info msg="received exit event container_id:\"291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454\" id:\"291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454\" pid:3475 exited_at:{seconds:1748316229 nanos:741856116}" May 27 03:23:49.742525 containerd[1566]: time="2025-05-27T03:23:49.742478901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454\" id:\"291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454\" pid:3475 exited_at:{seconds:1748316229 nanos:741856116}" May 27 03:23:49.776638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-291abd7072798845a2c0da88f8fd6d791393e90fd0fc960dc0887e8c617d7454-rootfs.mount: Deactivated successfully. 
May 27 03:23:49.941971 kubelet[2783]: I0527 03:23:49.867174 2783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:23:49.941971 kubelet[2783]: I0527 03:23:49.887764 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-788f9bcf5b-ztvtt" podStartSLOduration=2.76595703 podStartE2EDuration="4.886826052s" podCreationTimestamp="2025-05-27 03:23:45 +0000 UTC" firstStartedPulling="2025-05-27 03:23:46.5346017 +0000 UTC m=+22.998634304" lastFinishedPulling="2025-05-27 03:23:48.655470726 +0000 UTC m=+25.119503326" observedRunningTime="2025-05-27 03:23:48.882185133 +0000 UTC m=+25.346217742" watchObservedRunningTime="2025-05-27 03:23:49.886826052 +0000 UTC m=+26.350858662" May 27 03:23:50.718626 kubelet[2783]: E0527 03:23:50.718514 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5t8w" podUID="dca63bae-90e6-4118-afb8-b162c7c3ea6d" May 27 03:23:51.877600 containerd[1566]: time="2025-05-27T03:23:51.877522649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:23:52.719971 kubelet[2783]: E0527 03:23:52.719908 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5t8w" podUID="dca63bae-90e6-4118-afb8-b162c7c3ea6d" May 27 03:23:53.994216 kubelet[2783]: I0527 03:23:53.994177 2783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:23:54.720192 kubelet[2783]: E0527 03:23:54.718912 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5t8w" podUID="dca63bae-90e6-4118-afb8-b162c7c3ea6d" May 27 03:23:54.896034 containerd[1566]: time="2025-05-27T03:23:54.895975097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:54.897291 containerd[1566]: time="2025-05-27T03:23:54.897233302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:23:54.898771 containerd[1566]: time="2025-05-27T03:23:54.898635337Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:54.901355 containerd[1566]: time="2025-05-27T03:23:54.901276536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:54.902310 containerd[1566]: time="2025-05-27T03:23:54.902145356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.024430361s" May 27 03:23:54.902310 containerd[1566]: time="2025-05-27T03:23:54.902186588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:23:54.905484 containerd[1566]: time="2025-05-27T03:23:54.905444440Z" level=info msg="CreateContainer within sandbox 
\"4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:23:54.920488 containerd[1566]: time="2025-05-27T03:23:54.919590903Z" level=info msg="Container 4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:54.938011 containerd[1566]: time="2025-05-27T03:23:54.937965130Z" level=info msg="CreateContainer within sandbox \"4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c\"" May 27 03:23:54.939075 containerd[1566]: time="2025-05-27T03:23:54.939011732Z" level=info msg="StartContainer for \"4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c\"" May 27 03:23:54.942730 containerd[1566]: time="2025-05-27T03:23:54.942682095Z" level=info msg="connecting to shim 4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c" address="unix:///run/containerd/s/5a16fc6d7d6675b04c6647da7cb222edc8b196a98e459ac560ac4509fb7ed792" protocol=ttrpc version=3 May 27 03:23:54.982594 systemd[1]: Started cri-containerd-4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c.scope - libcontainer container 4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c. May 27 03:23:55.040615 containerd[1566]: time="2025-05-27T03:23:55.040492014Z" level=info msg="StartContainer for \"4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c\" returns successfully" May 27 03:23:56.022923 systemd[1]: cri-containerd-4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c.scope: Deactivated successfully. May 27 03:23:56.024184 systemd[1]: cri-containerd-4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c.scope: Consumed 634ms CPU time, 190.9M memory peak, 170.9M written to disk. 
May 27 03:23:56.033875 containerd[1566]: time="2025-05-27T03:23:56.033764373Z" level=info msg="received exit event container_id:\"4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c\" id:\"4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c\" pid:3536 exited_at:{seconds:1748316236 nanos:33383205}" May 27 03:23:56.035649 containerd[1566]: time="2025-05-27T03:23:56.035421545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c\" id:\"4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c\" pid:3536 exited_at:{seconds:1748316236 nanos:33383205}" May 27 03:23:56.036739 kubelet[2783]: I0527 03:23:56.036707 2783 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:23:56.090161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d28f20878c0fdd704b0140a7962e84c415330d1473776ebe32eae4def6b594c-rootfs.mount: Deactivated successfully. May 27 03:23:56.118450 systemd[1]: Created slice kubepods-besteffort-podc39f5529_814a_4886_be7d_327a6bc32817.slice - libcontainer container kubepods-besteffort-podc39f5529_814a_4886_be7d_327a6bc32817.slice. May 27 03:23:56.135195 systemd[1]: Created slice kubepods-burstable-pod597bcafa_bfd9_4d2c_864e_2f15d6a1d21c.slice - libcontainer container kubepods-burstable-pod597bcafa_bfd9_4d2c_864e_2f15d6a1d21c.slice. 
May 27 03:23:56.143215 kubelet[2783]: W0527 03:23:56.139647 2783 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' and this object May 27 03:23:56.143215 kubelet[2783]: E0527 03:23:56.139699 2783 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' and this object" logger="UnhandledError" May 27 03:23:56.165686 systemd[1]: Created slice kubepods-burstable-pode0ea8984_88dc_48df_a02b_bebd3dda937c.slice - libcontainer container kubepods-burstable-pode0ea8984_88dc_48df_a02b_bebd3dda937c.slice. May 27 03:23:56.182619 systemd[1]: Created slice kubepods-besteffort-pod662bc28a_aa93_4b26_9e56_f8dfc5b781cd.slice - libcontainer container kubepods-besteffort-pod662bc28a_aa93_4b26_9e56_f8dfc5b781cd.slice. May 27 03:23:56.195631 systemd[1]: Created slice kubepods-besteffort-pod40f80b17_095e_4372_a4ff_219cc92c5c20.slice - libcontainer container kubepods-besteffort-pod40f80b17_095e_4372_a4ff_219cc92c5c20.slice. May 27 03:23:56.209195 systemd[1]: Created slice kubepods-besteffort-podfece5a2d_649d_4220_838d_a0db5be7df49.slice - libcontainer container kubepods-besteffort-podfece5a2d_649d_4220_838d_a0db5be7df49.slice. 
May 27 03:23:56.220765 systemd[1]: Created slice kubepods-besteffort-pod051ad55f_d117_4e55_864a_7db1417bad0a.slice - libcontainer container kubepods-besteffort-pod051ad55f_d117_4e55_864a_7db1417bad0a.slice. May 27 03:23:56.333488 kubelet[2783]: I0527 03:23:56.291648 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fece5a2d-649d-4220-838d-a0db5be7df49-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-526wj\" (UID: \"fece5a2d-649d-4220-838d-a0db5be7df49\") " pod="calico-system/goldmane-78d55f7ddc-526wj" May 27 03:23:56.333488 kubelet[2783]: I0527 03:23:56.292579 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqflz\" (UniqueName: \"kubernetes.io/projected/c39f5529-814a-4886-be7d-327a6bc32817-kube-api-access-dqflz\") pod \"whisker-6df54485ff-xfn7l\" (UID: \"c39f5529-814a-4886-be7d-327a6bc32817\") " pod="calico-system/whisker-6df54485ff-xfn7l" May 27 03:23:56.333488 kubelet[2783]: I0527 03:23:56.292988 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmts6\" (UniqueName: \"kubernetes.io/projected/051ad55f-d117-4e55-864a-7db1417bad0a-kube-api-access-lmts6\") pod \"calico-apiserver-5854b5b658-n8nn4\" (UID: \"051ad55f-d117-4e55-864a-7db1417bad0a\") " pod="calico-apiserver/calico-apiserver-5854b5b658-n8nn4" May 27 03:23:56.333488 kubelet[2783]: I0527 03:23:56.293048 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfw7r\" (UniqueName: \"kubernetes.io/projected/fece5a2d-649d-4220-838d-a0db5be7df49-kube-api-access-nfw7r\") pod \"goldmane-78d55f7ddc-526wj\" (UID: \"fece5a2d-649d-4220-838d-a0db5be7df49\") " pod="calico-system/goldmane-78d55f7ddc-526wj" May 27 03:23:56.333488 kubelet[2783]: I0527 03:23:56.293101 2783 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4gf4\" (UniqueName: \"kubernetes.io/projected/597bcafa-bfd9-4d2c-864e-2f15d6a1d21c-kube-api-access-j4gf4\") pod \"coredns-668d6bf9bc-c5nrz\" (UID: \"597bcafa-bfd9-4d2c-864e-2f15d6a1d21c\") " pod="kube-system/coredns-668d6bf9bc-c5nrz" May 27 03:23:56.336195 kubelet[2783]: I0527 03:23:56.293130 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fece5a2d-649d-4220-838d-a0db5be7df49-config\") pod \"goldmane-78d55f7ddc-526wj\" (UID: \"fece5a2d-649d-4220-838d-a0db5be7df49\") " pod="calico-system/goldmane-78d55f7ddc-526wj" May 27 03:23:56.336195 kubelet[2783]: I0527 03:23:56.293181 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39f5529-814a-4886-be7d-327a6bc32817-whisker-ca-bundle\") pod \"whisker-6df54485ff-xfn7l\" (UID: \"c39f5529-814a-4886-be7d-327a6bc32817\") " pod="calico-system/whisker-6df54485ff-xfn7l" May 27 03:23:56.336195 kubelet[2783]: I0527 03:23:56.293211 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40f80b17-095e-4372-a4ff-219cc92c5c20-calico-apiserver-certs\") pod \"calico-apiserver-5854b5b658-fprlv\" (UID: \"40f80b17-095e-4372-a4ff-219cc92c5c20\") " pod="calico-apiserver/calico-apiserver-5854b5b658-fprlv" May 27 03:23:56.336195 kubelet[2783]: I0527 03:23:56.293260 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/662bc28a-aa93-4b26-9e56-f8dfc5b781cd-tigera-ca-bundle\") pod \"calico-kube-controllers-865759b8fd-5m84j\" (UID: \"662bc28a-aa93-4b26-9e56-f8dfc5b781cd\") " pod="calico-system/calico-kube-controllers-865759b8fd-5m84j" May 27 
03:23:56.336195 kubelet[2783]: I0527 03:23:56.293294 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c39f5529-814a-4886-be7d-327a6bc32817-whisker-backend-key-pair\") pod \"whisker-6df54485ff-xfn7l\" (UID: \"c39f5529-814a-4886-be7d-327a6bc32817\") " pod="calico-system/whisker-6df54485ff-xfn7l" May 27 03:23:56.336490 kubelet[2783]: I0527 03:23:56.293418 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flvhm\" (UniqueName: \"kubernetes.io/projected/e0ea8984-88dc-48df-a02b-bebd3dda937c-kube-api-access-flvhm\") pod \"coredns-668d6bf9bc-6j5hq\" (UID: \"e0ea8984-88dc-48df-a02b-bebd3dda937c\") " pod="kube-system/coredns-668d6bf9bc-6j5hq" May 27 03:23:56.336490 kubelet[2783]: I0527 03:23:56.293590 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fece5a2d-649d-4220-838d-a0db5be7df49-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-526wj\" (UID: \"fece5a2d-649d-4220-838d-a0db5be7df49\") " pod="calico-system/goldmane-78d55f7ddc-526wj" May 27 03:23:56.336490 kubelet[2783]: I0527 03:23:56.293625 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0ea8984-88dc-48df-a02b-bebd3dda937c-config-volume\") pod \"coredns-668d6bf9bc-6j5hq\" (UID: \"e0ea8984-88dc-48df-a02b-bebd3dda937c\") " pod="kube-system/coredns-668d6bf9bc-6j5hq" May 27 03:23:56.336490 kubelet[2783]: I0527 03:23:56.293655 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94dz8\" (UniqueName: \"kubernetes.io/projected/40f80b17-095e-4372-a4ff-219cc92c5c20-kube-api-access-94dz8\") pod \"calico-apiserver-5854b5b658-fprlv\" (UID: 
\"40f80b17-095e-4372-a4ff-219cc92c5c20\") " pod="calico-apiserver/calico-apiserver-5854b5b658-fprlv" May 27 03:23:56.336490 kubelet[2783]: I0527 03:23:56.293683 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/051ad55f-d117-4e55-864a-7db1417bad0a-calico-apiserver-certs\") pod \"calico-apiserver-5854b5b658-n8nn4\" (UID: \"051ad55f-d117-4e55-864a-7db1417bad0a\") " pod="calico-apiserver/calico-apiserver-5854b5b658-n8nn4" May 27 03:23:56.336729 kubelet[2783]: I0527 03:23:56.293709 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/597bcafa-bfd9-4d2c-864e-2f15d6a1d21c-config-volume\") pod \"coredns-668d6bf9bc-c5nrz\" (UID: \"597bcafa-bfd9-4d2c-864e-2f15d6a1d21c\") " pod="kube-system/coredns-668d6bf9bc-c5nrz" May 27 03:23:56.336729 kubelet[2783]: I0527 03:23:56.293735 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdcd\" (UniqueName: \"kubernetes.io/projected/662bc28a-aa93-4b26-9e56-f8dfc5b781cd-kube-api-access-qcdcd\") pod \"calico-kube-controllers-865759b8fd-5m84j\" (UID: \"662bc28a-aa93-4b26-9e56-f8dfc5b781cd\") " pod="calico-system/calico-kube-controllers-865759b8fd-5m84j" May 27 03:23:56.491151 containerd[1566]: time="2025-05-27T03:23:56.491084259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-865759b8fd-5m84j,Uid:662bc28a-aa93-4b26-9e56-f8dfc5b781cd,Namespace:calico-system,Attempt:0,}" May 27 03:23:56.503067 containerd[1566]: time="2025-05-27T03:23:56.503014515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5854b5b658-fprlv,Uid:40f80b17-095e-4372-a4ff-219cc92c5c20,Namespace:calico-apiserver,Attempt:0,}" May 27 03:23:56.516981 containerd[1566]: time="2025-05-27T03:23:56.516932556Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-526wj,Uid:fece5a2d-649d-4220-838d-a0db5be7df49,Namespace:calico-system,Attempt:0,}" May 27 03:23:56.527493 containerd[1566]: time="2025-05-27T03:23:56.527425383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5854b5b658-n8nn4,Uid:051ad55f-d117-4e55-864a-7db1417bad0a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:23:56.729384 containerd[1566]: time="2025-05-27T03:23:56.728805443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df54485ff-xfn7l,Uid:c39f5529-814a-4886-be7d-327a6bc32817,Namespace:calico-system,Attempt:0,}" May 27 03:23:56.741058 systemd[1]: Created slice kubepods-besteffort-poddca63bae_90e6_4118_afb8_b162c7c3ea6d.slice - libcontainer container kubepods-besteffort-poddca63bae_90e6_4118_afb8_b162c7c3ea6d.slice. May 27 03:23:56.750365 containerd[1566]: time="2025-05-27T03:23:56.749692229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r5t8w,Uid:dca63bae-90e6-4118-afb8-b162c7c3ea6d,Namespace:calico-system,Attempt:0,}" May 27 03:23:56.907520 containerd[1566]: time="2025-05-27T03:23:56.907473614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:23:56.988421 containerd[1566]: time="2025-05-27T03:23:56.988005060Z" level=error msg="Failed to destroy network for sandbox \"83c01d22e3e42e91b1aa544805594b92f30fe8ec75b29c0f358fd93e3abb834c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:56.993182 containerd[1566]: time="2025-05-27T03:23:56.993122208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r5t8w,Uid:dca63bae-90e6-4118-afb8-b162c7c3ea6d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"83c01d22e3e42e91b1aa544805594b92f30fe8ec75b29c0f358fd93e3abb834c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:56.993891 kubelet[2783]: E0527 03:23:56.993790 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c01d22e3e42e91b1aa544805594b92f30fe8ec75b29c0f358fd93e3abb834c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:56.994394 kubelet[2783]: E0527 03:23:56.994260 2783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c01d22e3e42e91b1aa544805594b92f30fe8ec75b29c0f358fd93e3abb834c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r5t8w" May 27 03:23:56.994394 kubelet[2783]: E0527 03:23:56.994310 2783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c01d22e3e42e91b1aa544805594b92f30fe8ec75b29c0f358fd93e3abb834c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r5t8w" May 27 03:23:56.995458 kubelet[2783]: E0527 03:23:56.995115 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r5t8w_calico-system(dca63bae-90e6-4118-afb8-b162c7c3ea6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-r5t8w_calico-system(dca63bae-90e6-4118-afb8-b162c7c3ea6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83c01d22e3e42e91b1aa544805594b92f30fe8ec75b29c0f358fd93e3abb834c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r5t8w" podUID="dca63bae-90e6-4118-afb8-b162c7c3ea6d" May 27 03:23:57.005197 containerd[1566]: time="2025-05-27T03:23:57.005151816Z" level=error msg="Failed to destroy network for sandbox \"cd34e25fb4fbe2bb450a786d16e2298af8b9d01c4921b205d88a4c609a893244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.005781 containerd[1566]: time="2025-05-27T03:23:57.005545048Z" level=error msg="Failed to destroy network for sandbox \"9bcfba631c65960ecfa94484ae37ddf270a26ea4867ffadb9a8c044caa7c2d21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.007284 containerd[1566]: time="2025-05-27T03:23:57.007233944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-865759b8fd-5m84j,Uid:662bc28a-aa93-4b26-9e56-f8dfc5b781cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bcfba631c65960ecfa94484ae37ddf270a26ea4867ffadb9a8c044caa7c2d21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.008883 kubelet[2783]: E0527 03:23:57.007710 2783 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bcfba631c65960ecfa94484ae37ddf270a26ea4867ffadb9a8c044caa7c2d21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.008883 kubelet[2783]: E0527 03:23:57.007788 2783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bcfba631c65960ecfa94484ae37ddf270a26ea4867ffadb9a8c044caa7c2d21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-865759b8fd-5m84j" May 27 03:23:57.008883 kubelet[2783]: E0527 03:23:57.007821 2783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bcfba631c65960ecfa94484ae37ddf270a26ea4867ffadb9a8c044caa7c2d21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-865759b8fd-5m84j" May 27 03:23:57.009114 containerd[1566]: time="2025-05-27T03:23:57.008391551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-526wj,Uid:fece5a2d-649d-4220-838d-a0db5be7df49,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd34e25fb4fbe2bb450a786d16e2298af8b9d01c4921b205d88a4c609a893244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.009217 kubelet[2783]: E0527 03:23:57.007885 2783 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-865759b8fd-5m84j_calico-system(662bc28a-aa93-4b26-9e56-f8dfc5b781cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-865759b8fd-5m84j_calico-system(662bc28a-aa93-4b26-9e56-f8dfc5b781cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bcfba631c65960ecfa94484ae37ddf270a26ea4867ffadb9a8c044caa7c2d21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-865759b8fd-5m84j" podUID="662bc28a-aa93-4b26-9e56-f8dfc5b781cd" May 27 03:23:57.010358 kubelet[2783]: E0527 03:23:57.010279 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd34e25fb4fbe2bb450a786d16e2298af8b9d01c4921b205d88a4c609a893244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.011265 kubelet[2783]: E0527 03:23:57.010545 2783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd34e25fb4fbe2bb450a786d16e2298af8b9d01c4921b205d88a4c609a893244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-526wj" May 27 03:23:57.011265 kubelet[2783]: E0527 03:23:57.011067 2783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd34e25fb4fbe2bb450a786d16e2298af8b9d01c4921b205d88a4c609a893244\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-526wj" May 27 03:23:57.011265 kubelet[2783]: E0527 03:23:57.011157 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-526wj_calico-system(fece5a2d-649d-4220-838d-a0db5be7df49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-526wj_calico-system(fece5a2d-649d-4220-838d-a0db5be7df49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd34e25fb4fbe2bb450a786d16e2298af8b9d01c4921b205d88a4c609a893244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-526wj" podUID="fece5a2d-649d-4220-838d-a0db5be7df49" May 27 03:23:57.028775 containerd[1566]: time="2025-05-27T03:23:57.028707128Z" level=error msg="Failed to destroy network for sandbox \"ae3a7dd72fe390ae799566dac8583d9d79adb103083db28dfe2ee79e0cdc3567\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.033097 containerd[1566]: time="2025-05-27T03:23:57.032998806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5854b5b658-fprlv,Uid:40f80b17-095e-4372-a4ff-219cc92c5c20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3a7dd72fe390ae799566dac8583d9d79adb103083db28dfe2ee79e0cdc3567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 27 03:23:57.035887 kubelet[2783]: E0527 03:23:57.033915 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3a7dd72fe390ae799566dac8583d9d79adb103083db28dfe2ee79e0cdc3567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.035887 kubelet[2783]: E0527 03:23:57.035440 2783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3a7dd72fe390ae799566dac8583d9d79adb103083db28dfe2ee79e0cdc3567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5854b5b658-fprlv" May 27 03:23:57.035887 kubelet[2783]: E0527 03:23:57.035486 2783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3a7dd72fe390ae799566dac8583d9d79adb103083db28dfe2ee79e0cdc3567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5854b5b658-fprlv" May 27 03:23:57.036119 kubelet[2783]: E0527 03:23:57.035568 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5854b5b658-fprlv_calico-apiserver(40f80b17-095e-4372-a4ff-219cc92c5c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5854b5b658-fprlv_calico-apiserver(40f80b17-095e-4372-a4ff-219cc92c5c20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ae3a7dd72fe390ae799566dac8583d9d79adb103083db28dfe2ee79e0cdc3567\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5854b5b658-fprlv" podUID="40f80b17-095e-4372-a4ff-219cc92c5c20" May 27 03:23:57.039598 containerd[1566]: time="2025-05-27T03:23:57.039550358Z" level=error msg="Failed to destroy network for sandbox \"8284c13c07b9309e23a333d4b1bc4ab5e59f277177359d4f83896c53709c6902\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.041538 containerd[1566]: time="2025-05-27T03:23:57.041450080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5854b5b658-n8nn4,Uid:051ad55f-d117-4e55-864a-7db1417bad0a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8284c13c07b9309e23a333d4b1bc4ab5e59f277177359d4f83896c53709c6902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.042402 kubelet[2783]: E0527 03:23:57.041783 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8284c13c07b9309e23a333d4b1bc4ab5e59f277177359d4f83896c53709c6902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.042402 kubelet[2783]: E0527 03:23:57.041850 2783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8284c13c07b9309e23a333d4b1bc4ab5e59f277177359d4f83896c53709c6902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5854b5b658-n8nn4" May 27 03:23:57.042402 kubelet[2783]: E0527 03:23:57.041879 2783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8284c13c07b9309e23a333d4b1bc4ab5e59f277177359d4f83896c53709c6902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5854b5b658-n8nn4" May 27 03:23:57.043302 kubelet[2783]: E0527 03:23:57.041948 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5854b5b658-n8nn4_calico-apiserver(051ad55f-d117-4e55-864a-7db1417bad0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5854b5b658-n8nn4_calico-apiserver(051ad55f-d117-4e55-864a-7db1417bad0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8284c13c07b9309e23a333d4b1bc4ab5e59f277177359d4f83896c53709c6902\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5854b5b658-n8nn4" podUID="051ad55f-d117-4e55-864a-7db1417bad0a" May 27 03:23:57.044027 containerd[1566]: time="2025-05-27T03:23:57.043212631Z" level=error msg="Failed to destroy network for sandbox \"5e40377fe3c1b36efb4e058fbcc8e0414e7aa88dd210fe8edb57d97815139580\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 27 03:23:57.045155 containerd[1566]: time="2025-05-27T03:23:57.045101294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df54485ff-xfn7l,Uid:c39f5529-814a-4886-be7d-327a6bc32817,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e40377fe3c1b36efb4e058fbcc8e0414e7aa88dd210fe8edb57d97815139580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.045662 kubelet[2783]: E0527 03:23:57.045484 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e40377fe3c1b36efb4e058fbcc8e0414e7aa88dd210fe8edb57d97815139580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:57.045662 kubelet[2783]: E0527 03:23:57.045546 2783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e40377fe3c1b36efb4e058fbcc8e0414e7aa88dd210fe8edb57d97815139580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6df54485ff-xfn7l" May 27 03:23:57.045662 kubelet[2783]: E0527 03:23:57.045578 2783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e40377fe3c1b36efb4e058fbcc8e0414e7aa88dd210fe8edb57d97815139580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-6df54485ff-xfn7l" May 27 03:23:57.045862 kubelet[2783]: E0527 03:23:57.045640 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6df54485ff-xfn7l_calico-system(c39f5529-814a-4886-be7d-327a6bc32817)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6df54485ff-xfn7l_calico-system(c39f5529-814a-4886-be7d-327a6bc32817)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e40377fe3c1b36efb4e058fbcc8e0414e7aa88dd210fe8edb57d97815139580\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6df54485ff-xfn7l" podUID="c39f5529-814a-4886-be7d-327a6bc32817" May 27 03:23:57.396773 kubelet[2783]: E0527 03:23:57.396722 2783 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 27 03:23:57.397304 kubelet[2783]: E0527 03:23:57.396855 2783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0ea8984-88dc-48df-a02b-bebd3dda937c-config-volume podName:e0ea8984-88dc-48df-a02b-bebd3dda937c nodeName:}" failed. No retries permitted until 2025-05-27 03:23:57.896825212 +0000 UTC m=+34.360857819 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e0ea8984-88dc-48df-a02b-bebd3dda937c-config-volume") pod "coredns-668d6bf9bc-6j5hq" (UID: "e0ea8984-88dc-48df-a02b-bebd3dda937c") : failed to sync configmap cache: timed out waiting for the condition May 27 03:23:57.412592 kubelet[2783]: E0527 03:23:57.412537 2783 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 27 03:23:57.412754 kubelet[2783]: E0527 03:23:57.412641 2783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/597bcafa-bfd9-4d2c-864e-2f15d6a1d21c-config-volume podName:597bcafa-bfd9-4d2c-864e-2f15d6a1d21c nodeName:}" failed. No retries permitted until 2025-05-27 03:23:57.912616193 +0000 UTC m=+34.376648780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/597bcafa-bfd9-4d2c-864e-2f15d6a1d21c-config-volume") pod "coredns-668d6bf9bc-c5nrz" (UID: "597bcafa-bfd9-4d2c-864e-2f15d6a1d21c") : failed to sync configmap cache: timed out waiting for the condition May 27 03:23:57.979634 containerd[1566]: time="2025-05-27T03:23:57.979564201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6j5hq,Uid:e0ea8984-88dc-48df-a02b-bebd3dda937c,Namespace:kube-system,Attempt:0,}" May 27 03:23:58.124777 containerd[1566]: time="2025-05-27T03:23:58.124700864Z" level=error msg="Failed to destroy network for sandbox \"cd1fa4c42dd2292c6cd9325541b6998a0e57bdbf5ebd6802920f2ad78a563b76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:58.133808 systemd[1]: run-netns-cni\x2d8a8eff24\x2d2612\x2d9c32\x2db69d\x2d5a8cc46f7c6a.mount: Deactivated successfully. 
May 27 03:23:58.136527 containerd[1566]: time="2025-05-27T03:23:58.136391193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6j5hq,Uid:e0ea8984-88dc-48df-a02b-bebd3dda937c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1fa4c42dd2292c6cd9325541b6998a0e57bdbf5ebd6802920f2ad78a563b76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:58.137257 kubelet[2783]: E0527 03:23:58.137207 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1fa4c42dd2292c6cd9325541b6998a0e57bdbf5ebd6802920f2ad78a563b76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:58.137693 kubelet[2783]: E0527 03:23:58.137292 2783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1fa4c42dd2292c6cd9325541b6998a0e57bdbf5ebd6802920f2ad78a563b76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6j5hq" May 27 03:23:58.137693 kubelet[2783]: E0527 03:23:58.137327 2783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1fa4c42dd2292c6cd9325541b6998a0e57bdbf5ebd6802920f2ad78a563b76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-6j5hq" May 27 03:23:58.138417 kubelet[2783]: E0527 03:23:58.137432 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6j5hq_kube-system(e0ea8984-88dc-48df-a02b-bebd3dda937c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6j5hq_kube-system(e0ea8984-88dc-48df-a02b-bebd3dda937c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd1fa4c42dd2292c6cd9325541b6998a0e57bdbf5ebd6802920f2ad78a563b76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6j5hq" podUID="e0ea8984-88dc-48df-a02b-bebd3dda937c" May 27 03:23:58.255919 containerd[1566]: time="2025-05-27T03:23:58.255412326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c5nrz,Uid:597bcafa-bfd9-4d2c-864e-2f15d6a1d21c,Namespace:kube-system,Attempt:0,}" May 27 03:23:58.374293 containerd[1566]: time="2025-05-27T03:23:58.374112438Z" level=error msg="Failed to destroy network for sandbox \"288f07ade7f65565f862c1ac25d81247bf7cd55c2dcc1bce231d817380f3827a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:58.382578 systemd[1]: run-netns-cni\x2d756ff2db\x2d3ec6\x2d7686\x2dc0bd\x2d1dc46ed5bf0e.mount: Deactivated successfully. 
May 27 03:23:58.384377 containerd[1566]: time="2025-05-27T03:23:58.383417109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c5nrz,Uid:597bcafa-bfd9-4d2c-864e-2f15d6a1d21c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"288f07ade7f65565f862c1ac25d81247bf7cd55c2dcc1bce231d817380f3827a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:58.384573 kubelet[2783]: E0527 03:23:58.383722 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"288f07ade7f65565f862c1ac25d81247bf7cd55c2dcc1bce231d817380f3827a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:58.384573 kubelet[2783]: E0527 03:23:58.383806 2783 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"288f07ade7f65565f862c1ac25d81247bf7cd55c2dcc1bce231d817380f3827a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c5nrz" May 27 03:23:58.384573 kubelet[2783]: E0527 03:23:58.383841 2783 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"288f07ade7f65565f862c1ac25d81247bf7cd55c2dcc1bce231d817380f3827a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-c5nrz" May 27 03:23:58.384771 kubelet[2783]: E0527 03:23:58.383903 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c5nrz_kube-system(597bcafa-bfd9-4d2c-864e-2f15d6a1d21c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c5nrz_kube-system(597bcafa-bfd9-4d2c-864e-2f15d6a1d21c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"288f07ade7f65565f862c1ac25d81247bf7cd55c2dcc1bce231d817380f3827a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-c5nrz" podUID="597bcafa-bfd9-4d2c-864e-2f15d6a1d21c" May 27 03:24:03.303066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount903485462.mount: Deactivated successfully. May 27 03:24:03.337418 containerd[1566]: time="2025-05-27T03:24:03.337317267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:03.338750 containerd[1566]: time="2025-05-27T03:24:03.338710276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:24:03.339770 containerd[1566]: time="2025-05-27T03:24:03.339671137Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:03.342435 containerd[1566]: time="2025-05-27T03:24:03.342317443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:03.343305 containerd[1566]: time="2025-05-27T03:24:03.343222886Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.435294998s" May 27 03:24:03.343305 containerd[1566]: time="2025-05-27T03:24:03.343269887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:24:03.368181 containerd[1566]: time="2025-05-27T03:24:03.367651241Z" level=info msg="CreateContainer within sandbox \"4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:24:03.381527 containerd[1566]: time="2025-05-27T03:24:03.381481912Z" level=info msg="Container bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:03.394113 containerd[1566]: time="2025-05-27T03:24:03.394047857Z" level=info msg="CreateContainer within sandbox \"4f243367801fab813a8767264e2106beb2cf8a59fd51d142e2afb80ca78a83dc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62\"" May 27 03:24:03.395372 containerd[1566]: time="2025-05-27T03:24:03.394664613Z" level=info msg="StartContainer for \"bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62\"" May 27 03:24:03.396970 containerd[1566]: time="2025-05-27T03:24:03.396926270Z" level=info msg="connecting to shim bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62" address="unix:///run/containerd/s/5a16fc6d7d6675b04c6647da7cb222edc8b196a98e459ac560ac4509fb7ed792" protocol=ttrpc version=3 May 27 03:24:03.425538 systemd[1]: Started 
cri-containerd-bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62.scope - libcontainer container bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62. May 27 03:24:03.490801 containerd[1566]: time="2025-05-27T03:24:03.490672708Z" level=info msg="StartContainer for \"bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62\" returns successfully" May 27 03:24:03.612784 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:24:03.612973 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 27 03:24:03.751326 kubelet[2783]: I0527 03:24:03.751279 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqflz\" (UniqueName: \"kubernetes.io/projected/c39f5529-814a-4886-be7d-327a6bc32817-kube-api-access-dqflz\") pod \"c39f5529-814a-4886-be7d-327a6bc32817\" (UID: \"c39f5529-814a-4886-be7d-327a6bc32817\") " May 27 03:24:03.751873 kubelet[2783]: I0527 03:24:03.751395 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39f5529-814a-4886-be7d-327a6bc32817-whisker-ca-bundle\") pod \"c39f5529-814a-4886-be7d-327a6bc32817\" (UID: \"c39f5529-814a-4886-be7d-327a6bc32817\") " May 27 03:24:03.751873 kubelet[2783]: I0527 03:24:03.751433 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c39f5529-814a-4886-be7d-327a6bc32817-whisker-backend-key-pair\") pod \"c39f5529-814a-4886-be7d-327a6bc32817\" (UID: \"c39f5529-814a-4886-be7d-327a6bc32817\") " May 27 03:24:03.755374 kubelet[2783]: I0527 03:24:03.754691 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c39f5529-814a-4886-be7d-327a6bc32817-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c39f5529-814a-4886-be7d-327a6bc32817" (UID: 
"c39f5529-814a-4886-be7d-327a6bc32817"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:24:03.760491 kubelet[2783]: I0527 03:24:03.760237 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39f5529-814a-4886-be7d-327a6bc32817-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c39f5529-814a-4886-be7d-327a6bc32817" (UID: "c39f5529-814a-4886-be7d-327a6bc32817"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:24:03.760617 kubelet[2783]: I0527 03:24:03.760519 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39f5529-814a-4886-be7d-327a6bc32817-kube-api-access-dqflz" (OuterVolumeSpecName: "kube-api-access-dqflz") pod "c39f5529-814a-4886-be7d-327a6bc32817" (UID: "c39f5529-814a-4886-be7d-327a6bc32817"). InnerVolumeSpecName "kube-api-access-dqflz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:24:03.852088 kubelet[2783]: I0527 03:24:03.852020 2783 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dqflz\" (UniqueName: \"kubernetes.io/projected/c39f5529-814a-4886-be7d-327a6bc32817-kube-api-access-dqflz\") on node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" DevicePath \"\"" May 27 03:24:03.852088 kubelet[2783]: I0527 03:24:03.852061 2783 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39f5529-814a-4886-be7d-327a6bc32817-whisker-ca-bundle\") on node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" DevicePath \"\"" May 27 03:24:03.852088 kubelet[2783]: I0527 03:24:03.852082 2783 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c39f5529-814a-4886-be7d-327a6bc32817-whisker-backend-key-pair\") on node \"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal\" DevicePath \"\"" May 27 03:24:03.946427 systemd[1]: Removed slice kubepods-besteffort-podc39f5529_814a_4886_be7d_327a6bc32817.slice - libcontainer container kubepods-besteffort-podc39f5529_814a_4886_be7d_327a6bc32817.slice. 
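
[Annotation] The `\x2d` sequences in unit names such as `run-netns-cni\x2d756ff2db\x2d…​.mount` and `var-lib-kubelet-pods-…​.mount` are systemd path escaping, per systemd.unit(5): strip slashes at the ends, map `/` to `-`, and hex-escape every byte outside `[A-Za-z0-9:_.]` (plus a leading `.`), so a literal `-` in the path becomes `\x2d`. A minimal sketch of that rule:

```go
// Sketch of systemd's path-to-unit-name escaping (systemd.unit(5)).
package main

import (
	"fmt"
	"strings"
)

func systemdEscapePath(p string) string {
	p = strings.Trim(p, "/")
	if p == "" {
		return "-" // the root path escapes to a single dash
	}
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == ':':
			b.WriteByte(c)
		case c == '.' && i > 0:
			b.WriteByte(c) // '.' is escaped only in the leading position
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // a literal '-' becomes \x2d
		}
	}
	return b.String()
}

func main() {
	// Reproduces the netns mount-unit stem seen in the log.
	fmt.Println(systemdEscapePath("/run/netns/cni-756ff2db-3ec6-7686-c0bd-1dc46ed5bf0e"))
	// prints: run-netns-cni\x2d756ff2db\x2d3ec6\x2d7686\x2dc0bd\x2d1dc46ed5bf0e
}
```

The pod slice names (`kubepods-besteffort-podc39f5529_814a_…​.slice`) look different because kubelet first replaces dashes in the pod UID with underscores before handing the name to systemd.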
May 27 03:24:03.972868 kubelet[2783]: I0527 03:24:03.972679 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rkqsc" podStartSLOduration=1.521352196 podStartE2EDuration="17.972655407s" podCreationTimestamp="2025-05-27 03:23:46 +0000 UTC" firstStartedPulling="2025-05-27 03:23:46.893737164 +0000 UTC m=+23.357769761" lastFinishedPulling="2025-05-27 03:24:03.345040369 +0000 UTC m=+39.809072972" observedRunningTime="2025-05-27 03:24:03.970288176 +0000 UTC m=+40.434320787" watchObservedRunningTime="2025-05-27 03:24:03.972655407 +0000 UTC m=+40.436688016" May 27 03:24:04.093321 systemd[1]: Created slice kubepods-besteffort-pod58f6d86c_3e78_41cf_91c0_b624abdbabc6.slice - libcontainer container kubepods-besteffort-pod58f6d86c_3e78_41cf_91c0_b624abdbabc6.slice. May 27 03:24:04.155363 kubelet[2783]: I0527 03:24:04.155286 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58f6d86c-3e78-41cf-91c0-b624abdbabc6-whisker-backend-key-pair\") pod \"whisker-5885f54b6c-n2m2n\" (UID: \"58f6d86c-3e78-41cf-91c0-b624abdbabc6\") " pod="calico-system/whisker-5885f54b6c-n2m2n" May 27 03:24:04.156513 kubelet[2783]: I0527 03:24:04.156477 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f6d86c-3e78-41cf-91c0-b624abdbabc6-whisker-ca-bundle\") pod \"whisker-5885f54b6c-n2m2n\" (UID: \"58f6d86c-3e78-41cf-91c0-b624abdbabc6\") " pod="calico-system/whisker-5885f54b6c-n2m2n" May 27 03:24:04.156783 kubelet[2783]: I0527 03:24:04.156717 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mtj\" (UniqueName: \"kubernetes.io/projected/58f6d86c-3e78-41cf-91c0-b624abdbabc6-kube-api-access-44mtj\") pod \"whisker-5885f54b6c-n2m2n\" (UID: 
\"58f6d86c-3e78-41cf-91c0-b624abdbabc6\") " pod="calico-system/whisker-5885f54b6c-n2m2n" May 27 03:24:04.205629 containerd[1566]: time="2025-05-27T03:24:04.205548155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62\" id:\"7ebf890fecbb1a2b7cac3546cb026e2f131493fe04b51d4a79ab83179ae1ba74\" pid:3865 exit_status:1 exited_at:{seconds:1748316244 nanos:205123298}" May 27 03:24:04.304686 systemd[1]: var-lib-kubelet-pods-c39f5529\x2d814a\x2d4886\x2dbe7d\x2d327a6bc32817-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddqflz.mount: Deactivated successfully. May 27 03:24:04.304841 systemd[1]: var-lib-kubelet-pods-c39f5529\x2d814a\x2d4886\x2dbe7d\x2d327a6bc32817-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 03:24:04.401814 containerd[1566]: time="2025-05-27T03:24:04.401758005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5885f54b6c-n2m2n,Uid:58f6d86c-3e78-41cf-91c0-b624abdbabc6,Namespace:calico-system,Attempt:0,}" May 27 03:24:04.544666 systemd-networkd[1450]: calif44937ac65e: Link UP May 27 03:24:04.546563 systemd-networkd[1450]: calif44937ac65e: Gained carrier May 27 03:24:04.568414 containerd[1566]: 2025-05-27 03:24:04.442 [INFO][3892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:24:04.568414 containerd[1566]: 2025-05-27 03:24:04.454 [INFO][3892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0 whisker-5885f54b6c- calico-system 58f6d86c-3e78-41cf-91c0-b624abdbabc6 878 0 2025-05-27 03:24:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5885f54b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 
ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal whisker-5885f54b6c-n2m2n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif44937ac65e [] [] }} ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Namespace="calico-system" Pod="whisker-5885f54b6c-n2m2n" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-" May 27 03:24:04.568414 containerd[1566]: 2025-05-27 03:24:04.455 [INFO][3892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Namespace="calico-system" Pod="whisker-5885f54b6c-n2m2n" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" May 27 03:24:04.568414 containerd[1566]: 2025-05-27 03:24:04.488 [INFO][3904] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" HandleID="k8s-pod-network.a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" May 27 03:24:04.568791 containerd[1566]: 2025-05-27 03:24:04.489 [INFO][3904] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" HandleID="k8s-pod-network.a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002332a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", "pod":"whisker-5885f54b6c-n2m2n", "timestamp":"2025-05-27 03:24:04.488939688 +0000 UTC"}, 
Hostname:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:04.568791 containerd[1566]: 2025-05-27 03:24:04.489 [INFO][3904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:04.568791 containerd[1566]: 2025-05-27 03:24:04.489 [INFO][3904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:04.568791 containerd[1566]: 2025-05-27 03:24:04.489 [INFO][3904] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' May 27 03:24:04.568791 containerd[1566]: 2025-05-27 03:24:04.497 [INFO][3904] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:04.568791 containerd[1566]: 2025-05-27 03:24:04.504 [INFO][3904] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:04.568791 containerd[1566]: 2025-05-27 03:24:04.510 [INFO][3904] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:04.568791 containerd[1566]: 2025-05-27 03:24:04.512 [INFO][3904] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:04.569234 containerd[1566]: 2025-05-27 03:24:04.514 [INFO][3904] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:04.569234 containerd[1566]: 2025-05-27 03:24:04.514 [INFO][3904] ipam/ipam.go 1220: Attempting to assign 
1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:04.569234 containerd[1566]: 2025-05-27 03:24:04.517 [INFO][3904] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd May 27 03:24:04.569234 containerd[1566]: 2025-05-27 03:24:04.522 [INFO][3904] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:04.569234 containerd[1566]: 2025-05-27 03:24:04.528 [INFO][3904] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.193/26] block=192.168.71.192/26 handle="k8s-pod-network.a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:04.569234 containerd[1566]: 2025-05-27 03:24:04.528 [INFO][3904] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.193/26] handle="k8s-pod-network.a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:04.569234 containerd[1566]: 2025-05-27 03:24:04.529 [INFO][3904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:04.569234 containerd[1566]: 2025-05-27 03:24:04.529 [INFO][3904] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.193/26] IPv6=[] ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" HandleID="k8s-pod-network.a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" May 27 03:24:04.569849 containerd[1566]: 2025-05-27 03:24:04.532 [INFO][3892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Namespace="calico-system" Pod="whisker-5885f54b6c-n2m2n" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0", GenerateName:"whisker-5885f54b6c-", Namespace:"calico-system", SelfLink:"", UID:"58f6d86c-3e78-41cf-91c0-b624abdbabc6", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5885f54b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-5885f54b6c-n2m2n", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.71.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif44937ac65e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:04.569976 containerd[1566]: 2025-05-27 03:24:04.532 [INFO][3892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.193/32] ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Namespace="calico-system" Pod="whisker-5885f54b6c-n2m2n" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" May 27 03:24:04.569976 containerd[1566]: 2025-05-27 03:24:04.532 [INFO][3892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif44937ac65e ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Namespace="calico-system" Pod="whisker-5885f54b6c-n2m2n" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" May 27 03:24:04.569976 containerd[1566]: 2025-05-27 03:24:04.547 [INFO][3892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Namespace="calico-system" Pod="whisker-5885f54b6c-n2m2n" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" May 27 03:24:04.570163 containerd[1566]: 2025-05-27 03:24:04.548 [INFO][3892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Namespace="calico-system" Pod="whisker-5885f54b6c-n2m2n" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0", GenerateName:"whisker-5885f54b6c-", Namespace:"calico-system", SelfLink:"", UID:"58f6d86c-3e78-41cf-91c0-b624abdbabc6", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5885f54b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd", Pod:"whisker-5885f54b6c-n2m2n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif44937ac65e", MAC:"8a:30:3f:af:ce:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:04.570286 containerd[1566]: 2025-05-27 03:24:04.565 [INFO][3892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" Namespace="calico-system" Pod="whisker-5885f54b6c-n2m2n" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-whisker--5885f54b6c--n2m2n-eth0" May 27 03:24:04.598384 containerd[1566]: 
time="2025-05-27T03:24:04.598301825Z" level=info msg="connecting to shim a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd" address="unix:///run/containerd/s/73ffaf5846a0ba76a6fb5f899e9e22cfaa2fedcf0a174d8a3d0e0fb50b8284de" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:04.638591 systemd[1]: Started cri-containerd-a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd.scope - libcontainer container a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd. May 27 03:24:04.707715 containerd[1566]: time="2025-05-27T03:24:04.707665317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5885f54b6c-n2m2n,Uid:58f6d86c-3e78-41cf-91c0-b624abdbabc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5e752a523c7f026702f45e218876e5901f20cd99ef8c503f0df09ef251678dd\"" May 27 03:24:04.710399 containerd[1566]: time="2025-05-27T03:24:04.710323245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:04.835023 containerd[1566]: time="2025-05-27T03:24:04.834793524Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:04.836550 containerd[1566]: time="2025-05-27T03:24:04.836496170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:04.836696 containerd[1566]: time="2025-05-27T03:24:04.836544073Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:04.836874 kubelet[2783]: E0527 03:24:04.836806 2783 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:04.837778 kubelet[2783]: E0527 03:24:04.836875 2783 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:04.837847 kubelet[2783]: E0527 03:24:04.837086 2783 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ce6e86134e5b48b79055a94e0b2c2b01,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44mtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5885f54b6c-n2m2n_calico-system(58f6d86c-3e78-41cf-91c0-b624abdbabc6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:04.839813 containerd[1566]: 
time="2025-05-27T03:24:04.839781695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:24:04.961374 containerd[1566]: time="2025-05-27T03:24:04.960836513Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:04.962626 containerd[1566]: time="2025-05-27T03:24:04.962575329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:04.962876 containerd[1566]: time="2025-05-27T03:24:04.962736248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:24:04.963369 kubelet[2783]: E0527 03:24:04.963226 2783 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:04.963504 kubelet[2783]: E0527 03:24:04.963332 2783 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:04.964557 kubelet[2783]: E0527 03:24:04.964375 2783 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44mtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5885f54b6c-n2m2n_calico-system(58f6d86c-3e78-41cf-91c0-b624abdbabc6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:04.965989 kubelet[2783]: E0527 03:24:04.965717 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5885f54b6c-n2m2n" podUID="58f6d86c-3e78-41cf-91c0-b624abdbabc6" May 27 03:24:05.032253 containerd[1566]: 
time="2025-05-27T03:24:05.032134058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62\" id:\"393d3eff7c6b4c8b9cd52eae9269177d92275ae10214288f64bc7ae498d3f118\" pid:3975 exit_status:1 exited_at:{seconds:1748316245 nanos:31764450}" May 27 03:24:05.725097 kubelet[2783]: I0527 03:24:05.725044 2783 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39f5529-814a-4886-be7d-327a6bc32817" path="/var/lib/kubelet/pods/c39f5529-814a-4886-be7d-327a6bc32817/volumes" May 27 03:24:05.869543 systemd-networkd[1450]: calif44937ac65e: Gained IPv6LL May 27 03:24:05.940960 kubelet[2783]: E0527 03:24:05.940897 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5885f54b6c-n2m2n" podUID="58f6d86c-3e78-41cf-91c0-b624abdbabc6" May 27 03:24:06.053356 systemd-networkd[1450]: vxlan.calico: Link UP May 27 
03:24:06.053371 systemd-networkd[1450]: vxlan.calico: Gained carrier May 27 03:24:07.342051 systemd-networkd[1450]: vxlan.calico: Gained IPv6LL May 27 03:24:07.720357 containerd[1566]: time="2025-05-27T03:24:07.720281593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5854b5b658-n8nn4,Uid:051ad55f-d117-4e55-864a-7db1417bad0a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:07.722407 containerd[1566]: time="2025-05-27T03:24:07.722057617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5854b5b658-fprlv,Uid:40f80b17-095e-4372-a4ff-219cc92c5c20,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:07.918803 systemd-networkd[1450]: cali5fba3258fde: Link UP May 27 03:24:07.920332 systemd-networkd[1450]: cali5fba3258fde: Gained carrier May 27 03:24:07.951302 containerd[1566]: 2025-05-27 03:24:07.809 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0 calico-apiserver-5854b5b658- calico-apiserver 051ad55f-d117-4e55-864a-7db1417bad0a 812 0 2025-05-27 03:23:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5854b5b658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal calico-apiserver-5854b5b658-n8nn4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5fba3258fde [] [] }} ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-n8nn4" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-" May 27 03:24:07.951302 containerd[1566]: 
2025-05-27 03:24:07.809 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-n8nn4" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" May 27 03:24:07.951302 containerd[1566]: 2025-05-27 03:24:07.866 [INFO][4210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" HandleID="k8s-pod-network.6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" May 27 03:24:07.951639 containerd[1566]: 2025-05-27 03:24:07.866 [INFO][4210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" HandleID="k8s-pod-network.6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3c70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", "pod":"calico-apiserver-5854b5b658-n8nn4", "timestamp":"2025-05-27 03:24:07.866055328 +0000 UTC"}, Hostname:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:07.951639 containerd[1566]: 2025-05-27 03:24:07.866 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 03:24:07.951639 containerd[1566]: 2025-05-27 03:24:07.866 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:07.951639 containerd[1566]: 2025-05-27 03:24:07.866 [INFO][4210] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' May 27 03:24:07.951639 containerd[1566]: 2025-05-27 03:24:07.876 [INFO][4210] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:07.951639 containerd[1566]: 2025-05-27 03:24:07.881 [INFO][4210] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:07.951639 containerd[1566]: 2025-05-27 03:24:07.886 [INFO][4210] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:07.951639 containerd[1566]: 2025-05-27 03:24:07.888 [INFO][4210] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:07.952092 containerd[1566]: 2025-05-27 03:24:07.891 [INFO][4210] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:07.952092 containerd[1566]: 2025-05-27 03:24:07.891 [INFO][4210] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:07.952092 containerd[1566]: 2025-05-27 03:24:07.893 [INFO][4210] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02 May 27 
03:24:07.952092 containerd[1566]: 2025-05-27 03:24:07.897 [INFO][4210] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:07.952092 containerd[1566]: 2025-05-27 03:24:07.907 [INFO][4210] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.194/26] block=192.168.71.192/26 handle="k8s-pod-network.6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:07.952092 containerd[1566]: 2025-05-27 03:24:07.907 [INFO][4210] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.194/26] handle="k8s-pod-network.6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:07.952092 containerd[1566]: 2025-05-27 03:24:07.907 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:07.952092 containerd[1566]: 2025-05-27 03:24:07.907 [INFO][4210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.194/26] IPv6=[] ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" HandleID="k8s-pod-network.6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" May 27 03:24:07.953983 containerd[1566]: 2025-05-27 03:24:07.911 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-n8nn4" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0", GenerateName:"calico-apiserver-5854b5b658-", Namespace:"calico-apiserver", SelfLink:"", UID:"051ad55f-d117-4e55-864a-7db1417bad0a", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5854b5b658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", 
ContainerID:"", Pod:"calico-apiserver-5854b5b658-n8nn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fba3258fde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:07.954116 containerd[1566]: 2025-05-27 03:24:07.912 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.194/32] ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-n8nn4" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" May 27 03:24:07.954116 containerd[1566]: 2025-05-27 03:24:07.912 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fba3258fde ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-n8nn4" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" May 27 03:24:07.954116 containerd[1566]: 2025-05-27 03:24:07.923 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-n8nn4" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" May 27 03:24:07.955404 containerd[1566]: 2025-05-27 03:24:07.924 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" 
Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-n8nn4" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0", GenerateName:"calico-apiserver-5854b5b658-", Namespace:"calico-apiserver", SelfLink:"", UID:"051ad55f-d117-4e55-864a-7db1417bad0a", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5854b5b658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02", Pod:"calico-apiserver-5854b5b658-n8nn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fba3258fde", MAC:"d6:ab:76:5f:c1:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:07.955404 containerd[1566]: 2025-05-27 03:24:07.946 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-n8nn4" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--n8nn4-eth0" May 27 03:24:08.006105 containerd[1566]: time="2025-05-27T03:24:08.005956475Z" level=info msg="connecting to shim 6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02" address="unix:///run/containerd/s/312291263347be8e4aebeeae2b0561e7fb9c44c83cb79f662ae3f85ac48b5334" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:08.098576 systemd[1]: Started cri-containerd-6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02.scope - libcontainer container 6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02. May 27 03:24:08.129195 systemd-networkd[1450]: cali997e5acccca: Link UP May 27 03:24:08.131883 systemd-networkd[1450]: cali997e5acccca: Gained carrier May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:07.818 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0 calico-apiserver-5854b5b658- calico-apiserver 40f80b17-095e-4372-a4ff-219cc92c5c20 811 0 2025-05-27 03:23:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5854b5b658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal calico-apiserver-5854b5b658-fprlv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali997e5acccca [] [] }} ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Namespace="calico-apiserver" 
Pod="calico-apiserver-5854b5b658-fprlv" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:07.818 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-fprlv" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:07.866 [INFO][4215] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" HandleID="k8s-pod-network.6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:07.867 [INFO][4215] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" HandleID="k8s-pod-network.6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d97f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", "pod":"calico-apiserver-5854b5b658-fprlv", "timestamp":"2025-05-27 03:24:07.86678972 +0000 UTC"}, Hostname:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:07.867 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:07.907 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:07.907 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:07.996 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.020 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.056 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.066 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.072 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.073 [INFO][4215] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.167933 containerd[1566]: 
2025-05-27 03:24:08.076 [INFO][4215] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.103 [INFO][4215] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.117 [INFO][4215] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.195/26] block=192.168.71.192/26 handle="k8s-pod-network.6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.117 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.195/26] handle="k8s-pod-network.6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.117 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:08.167933 containerd[1566]: 2025-05-27 03:24:08.118 [INFO][4215] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.195/26] IPv6=[] ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" HandleID="k8s-pod-network.6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" May 27 03:24:08.170157 containerd[1566]: 2025-05-27 03:24:08.121 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-fprlv" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0", GenerateName:"calico-apiserver-5854b5b658-", Namespace:"calico-apiserver", SelfLink:"", UID:"40f80b17-095e-4372-a4ff-219cc92c5c20", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5854b5b658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", 
ContainerID:"", Pod:"calico-apiserver-5854b5b658-fprlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali997e5acccca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:08.170157 containerd[1566]: 2025-05-27 03:24:08.122 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.195/32] ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-fprlv" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" May 27 03:24:08.170157 containerd[1566]: 2025-05-27 03:24:08.122 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali997e5acccca ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-fprlv" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" May 27 03:24:08.170157 containerd[1566]: 2025-05-27 03:24:08.132 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-fprlv" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" May 27 03:24:08.170157 containerd[1566]: 2025-05-27 03:24:08.133 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" 
Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-fprlv" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0", GenerateName:"calico-apiserver-5854b5b658-", Namespace:"calico-apiserver", SelfLink:"", UID:"40f80b17-095e-4372-a4ff-219cc92c5c20", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5854b5b658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e", Pod:"calico-apiserver-5854b5b658-fprlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali997e5acccca", MAC:"9e:c0:f3:c8:3f:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:08.170157 containerd[1566]: 2025-05-27 03:24:08.162 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" Namespace="calico-apiserver" Pod="calico-apiserver-5854b5b658-fprlv" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--apiserver--5854b5b658--fprlv-eth0" May 27 03:24:08.220139 containerd[1566]: time="2025-05-27T03:24:08.219997331Z" level=info msg="connecting to shim 6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e" address="unix:///run/containerd/s/8bcfca582bba83b47337717e45dd027bce72f9feeaf5b6a69d17fc0618286efb" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:08.283878 systemd[1]: Started cri-containerd-6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e.scope - libcontainer container 6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e. May 27 03:24:08.348437 containerd[1566]: time="2025-05-27T03:24:08.348377330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5854b5b658-n8nn4,Uid:051ad55f-d117-4e55-864a-7db1417bad0a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02\"" May 27 03:24:08.352698 containerd[1566]: time="2025-05-27T03:24:08.352644500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:24:08.393169 containerd[1566]: time="2025-05-27T03:24:08.392952301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5854b5b658-fprlv,Uid:40f80b17-095e-4372-a4ff-219cc92c5c20,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e\"" May 27 03:24:08.720413 containerd[1566]: time="2025-05-27T03:24:08.720358610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-526wj,Uid:fece5a2d-649d-4220-838d-a0db5be7df49,Namespace:calico-system,Attempt:0,}" May 27 03:24:08.870061 systemd-networkd[1450]: calia70e5048509: Link UP May 27 03:24:08.872968 
systemd-networkd[1450]: calia70e5048509: Gained carrier May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.777 [INFO][4331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0 goldmane-78d55f7ddc- calico-system fece5a2d-649d-4220-838d-a0db5be7df49 814 0 2025-05-27 03:23:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal goldmane-78d55f7ddc-526wj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia70e5048509 [] [] }} ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Namespace="calico-system" Pod="goldmane-78d55f7ddc-526wj" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.777 [INFO][4331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Namespace="calico-system" Pod="goldmane-78d55f7ddc-526wj" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.816 [INFO][4343] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" HandleID="k8s-pod-network.5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.816 
[INFO][4343] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" HandleID="k8s-pod-network.5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d99a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", "pod":"goldmane-78d55f7ddc-526wj", "timestamp":"2025-05-27 03:24:08.816454542 +0000 UTC"}, Hostname:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.816 [INFO][4343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.816 [INFO][4343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.816 [INFO][4343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.829 [INFO][4343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.835 [INFO][4343] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.840 [INFO][4343] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.842 [INFO][4343] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.845 [INFO][4343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.845 [INFO][4343] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.847 [INFO][4343] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.853 [INFO][4343] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.71.192/26 handle="k8s-pod-network.5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.862 [INFO][4343] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.196/26] block=192.168.71.192/26 handle="k8s-pod-network.5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.862 [INFO][4343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.196/26] handle="k8s-pod-network.5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.863 [INFO][4343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:08.899179 containerd[1566]: 2025-05-27 03:24:08.863 [INFO][4343] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.196/26] IPv6=[] ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" HandleID="k8s-pod-network.5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" May 27 03:24:08.901145 containerd[1566]: 2025-05-27 03:24:08.866 [INFO][4331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Namespace="calico-system" Pod="goldmane-78d55f7ddc-526wj" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"fece5a2d-649d-4220-838d-a0db5be7df49", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-78d55f7ddc-526wj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia70e5048509", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:08.901145 containerd[1566]: 2025-05-27 03:24:08.866 [INFO][4331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.196/32] ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Namespace="calico-system" Pod="goldmane-78d55f7ddc-526wj" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" May 27 03:24:08.901145 containerd[1566]: 2025-05-27 03:24:08.866 [INFO][4331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia70e5048509 
ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Namespace="calico-system" Pod="goldmane-78d55f7ddc-526wj" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" May 27 03:24:08.901145 containerd[1566]: 2025-05-27 03:24:08.871 [INFO][4331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Namespace="calico-system" Pod="goldmane-78d55f7ddc-526wj" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" May 27 03:24:08.901145 containerd[1566]: 2025-05-27 03:24:08.874 [INFO][4331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Namespace="calico-system" Pod="goldmane-78d55f7ddc-526wj" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"fece5a2d-649d-4220-838d-a0db5be7df49", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c", Pod:"goldmane-78d55f7ddc-526wj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia70e5048509", MAC:"fa:33:99:ff:6b:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:08.901145 containerd[1566]: 2025-05-27 03:24:08.895 [INFO][4331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" Namespace="calico-system" Pod="goldmane-78d55f7ddc-526wj" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-goldmane--78d55f7ddc--526wj-eth0" May 27 03:24:08.944537 containerd[1566]: time="2025-05-27T03:24:08.944430481Z" level=info msg="connecting to shim 5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c" address="unix:///run/containerd/s/5a6ff8d354364484bc1f7dc3f9b3b0f811b26378b3529840a303a9e5e26fc765" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:08.993644 systemd[1]: Started cri-containerd-5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c.scope - libcontainer container 5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c. 
May 27 03:24:09.063076 containerd[1566]: time="2025-05-27T03:24:09.062958534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-526wj,Uid:fece5a2d-649d-4220-838d-a0db5be7df49,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a8a583a1dc5984c846bba48bb0e348d39165e800cc198f45762482841dc840c\"" May 27 03:24:09.133749 systemd-networkd[1450]: cali5fba3258fde: Gained IPv6LL May 27 03:24:09.647642 systemd-networkd[1450]: cali997e5acccca: Gained IPv6LL May 27 03:24:09.720491 containerd[1566]: time="2025-05-27T03:24:09.720005373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r5t8w,Uid:dca63bae-90e6-4118-afb8-b162c7c3ea6d,Namespace:calico-system,Attempt:0,}" May 27 03:24:09.723138 containerd[1566]: time="2025-05-27T03:24:09.723095172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c5nrz,Uid:597bcafa-bfd9-4d2c-864e-2f15d6a1d21c,Namespace:kube-system,Attempt:0,}" May 27 03:24:09.723379 containerd[1566]: time="2025-05-27T03:24:09.723329736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-865759b8fd-5m84j,Uid:662bc28a-aa93-4b26-9e56-f8dfc5b781cd,Namespace:calico-system,Attempt:0,}" May 27 03:24:10.047947 systemd-networkd[1450]: cali9ef92e2b045: Link UP May 27 03:24:10.050062 systemd-networkd[1450]: cali9ef92e2b045: Gained carrier May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.831 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0 coredns-668d6bf9bc- kube-system 597bcafa-bfd9-4d2c-864e-2f15d6a1d21c 810 0 2025-05-27 03:23:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal coredns-668d6bf9bc-c5nrz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ef92e2b045 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5nrz" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.832 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5nrz" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.932 [INFO][4446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" HandleID="k8s-pod-network.79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.933 [INFO][4446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" HandleID="k8s-pod-network.79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac360), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-c5nrz", "timestamp":"2025-05-27 03:24:09.932974466 +0000 UTC"}, 
Hostname:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.933 [INFO][4446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.933 [INFO][4446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.933 [INFO][4446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.950 [INFO][4446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.961 [INFO][4446] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.976 [INFO][4446] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.982 [INFO][4446] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.986 [INFO][4446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.987 [INFO][4446] ipam/ipam.go 1220: Attempting to assign 
1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:09.989 [INFO][4446] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:10.000 [INFO][4446] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:10.019 [INFO][4446] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.197/26] block=192.168.71.192/26 handle="k8s-pod-network.79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:10.020 [INFO][4446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.197/26] handle="k8s-pod-network.79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:10.021 [INFO][4446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:10.102859 containerd[1566]: 2025-05-27 03:24:10.022 [INFO][4446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.197/26] IPv6=[] ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" HandleID="k8s-pod-network.79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" May 27 03:24:10.105727 containerd[1566]: 2025-05-27 03:24:10.036 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5nrz" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"597bcafa-bfd9-4d2c-864e-2f15d6a1d21c", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-c5nrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.197/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ef92e2b045", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:10.105727 containerd[1566]: 2025-05-27 03:24:10.042 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.197/32] ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5nrz" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" May 27 03:24:10.105727 containerd[1566]: 2025-05-27 03:24:10.042 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ef92e2b045 ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5nrz" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" May 27 03:24:10.105727 containerd[1566]: 2025-05-27 03:24:10.052 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5nrz" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" May 27 03:24:10.105727 containerd[1566]: 2025-05-27 03:24:10.054 [INFO][4409] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5nrz" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"597bcafa-bfd9-4d2c-864e-2f15d6a1d21c", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b", Pod:"coredns-668d6bf9bc-c5nrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ef92e2b045", MAC:"fe:f0:8e:51:96:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:10.105727 containerd[1566]: 2025-05-27 03:24:10.082 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5nrz" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--c5nrz-eth0" May 27 03:24:10.209367 containerd[1566]: time="2025-05-27T03:24:10.207777010Z" level=info msg="connecting to shim 79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b" address="unix:///run/containerd/s/8bfb5a26600cb517c7c7729d8b4ea9be1fa1a16b93421f12ffb39bf0f8759bb5" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:10.234257 systemd-networkd[1450]: cali294a337e5b4: Link UP May 27 03:24:10.259394 systemd-networkd[1450]: cali294a337e5b4: Gained carrier May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:09.932 [INFO][4433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0 calico-kube-controllers-865759b8fd- calico-system 662bc28a-aa93-4b26-9e56-f8dfc5b781cd 813 0 2025-05-27 03:23:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:865759b8fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal calico-kube-controllers-865759b8fd-5m84j eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali294a337e5b4 [] [] }} ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Namespace="calico-system" Pod="calico-kube-controllers-865759b8fd-5m84j" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:09.932 [INFO][4433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Namespace="calico-system" Pod="calico-kube-controllers-865759b8fd-5m84j" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.056 [INFO][4465] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" HandleID="k8s-pod-network.88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.073 [INFO][4465] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" HandleID="k8s-pod-network.88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", "pod":"calico-kube-controllers-865759b8fd-5m84j", "timestamp":"2025-05-27 03:24:10.056220829 +0000 UTC"}, 
Hostname:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.076 [INFO][4465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.078 [INFO][4465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.078 [INFO][4465] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.117 [INFO][4465] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.137 [INFO][4465] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.153 [INFO][4465] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.159 [INFO][4465] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.166 [INFO][4465] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.166 [INFO][4465] ipam/ipam.go 1220: Attempting to assign 
1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.171 [INFO][4465] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.188 [INFO][4465] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.208 [INFO][4465] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.198/26] block=192.168.71.192/26 handle="k8s-pod-network.88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.210 [INFO][4465] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.198/26] handle="k8s-pod-network.88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.212 [INFO][4465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:10.304013 containerd[1566]: 2025-05-27 03:24:10.212 [INFO][4465] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.198/26] IPv6=[] ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" HandleID="k8s-pod-network.88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" May 27 03:24:10.306414 containerd[1566]: 2025-05-27 03:24:10.218 [INFO][4433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Namespace="calico-system" Pod="calico-kube-controllers-865759b8fd-5m84j" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0", GenerateName:"calico-kube-controllers-865759b8fd-", Namespace:"calico-system", SelfLink:"", UID:"662bc28a-aa93-4b26-9e56-f8dfc5b781cd", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"865759b8fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-865759b8fd-5m84j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali294a337e5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:10.306414 containerd[1566]: 2025-05-27 03:24:10.218 [INFO][4433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.198/32] ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Namespace="calico-system" Pod="calico-kube-controllers-865759b8fd-5m84j" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" May 27 03:24:10.306414 containerd[1566]: 2025-05-27 03:24:10.218 [INFO][4433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali294a337e5b4 ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Namespace="calico-system" Pod="calico-kube-controllers-865759b8fd-5m84j" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" May 27 03:24:10.306414 containerd[1566]: 2025-05-27 03:24:10.256 [INFO][4433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Namespace="calico-system" Pod="calico-kube-controllers-865759b8fd-5m84j" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" May 27 03:24:10.306414 containerd[1566]: 2025-05-27 03:24:10.258 [INFO][4433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Namespace="calico-system" Pod="calico-kube-controllers-865759b8fd-5m84j" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0", GenerateName:"calico-kube-controllers-865759b8fd-", Namespace:"calico-system", SelfLink:"", UID:"662bc28a-aa93-4b26-9e56-f8dfc5b781cd", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"865759b8fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea", Pod:"calico-kube-controllers-865759b8fd-5m84j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali294a337e5b4", MAC:"de:2a:ba:d7:1f:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
May 27 03:24:10.306414 containerd[1566]: 2025-05-27 03:24:10.289 [INFO][4433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" Namespace="calico-system" Pod="calico-kube-controllers-865759b8fd-5m84j" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-calico--kube--controllers--865759b8fd--5m84j-eth0" May 27 03:24:10.327609 systemd[1]: Started cri-containerd-79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b.scope - libcontainer container 79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b. May 27 03:24:10.377277 systemd-networkd[1450]: calibf21ca10672: Link UP May 27 03:24:10.377684 systemd-networkd[1450]: calibf21ca10672: Gained carrier May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:09.909 [INFO][4408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0 csi-node-driver- calico-system dca63bae-90e6-4118-afb8-b162c7c3ea6d 696 0 2025-05-27 03:23:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal csi-node-driver-r5t8w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibf21ca10672 [] [] }} ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Namespace="calico-system" Pod="csi-node-driver-r5t8w" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:09.909 [INFO][4408] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Namespace="calico-system" Pod="csi-node-driver-r5t8w" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.117 [INFO][4458] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" HandleID="k8s-pod-network.9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.117 [INFO][4458] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" HandleID="k8s-pod-network.9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", "pod":"csi-node-driver-r5t8w", "timestamp":"2025-05-27 03:24:10.116929887 +0000 UTC"}, Hostname:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.117 [INFO][4458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.212 [INFO][4458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.212 [INFO][4458] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.245 [INFO][4458] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.267 [INFO][4458] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.285 [INFO][4458] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.292 [INFO][4458] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.303 [INFO][4458] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.303 [INFO][4458] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.312 [INFO][4458] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.323 [INFO][4458] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.71.192/26 handle="k8s-pod-network.9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.347 [INFO][4458] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.199/26] block=192.168.71.192/26 handle="k8s-pod-network.9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.347 [INFO][4458] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.199/26] handle="k8s-pod-network.9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.347 [INFO][4458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:10.450284 containerd[1566]: 2025-05-27 03:24:10.347 [INFO][4458] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.199/26] IPv6=[] ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" HandleID="k8s-pod-network.9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" May 27 03:24:10.451436 containerd[1566]: 2025-05-27 03:24:10.351 [INFO][4408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Namespace="calico-system" Pod="csi-node-driver-r5t8w" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dca63bae-90e6-4118-afb8-b162c7c3ea6d", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-r5t8w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf21ca10672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:10.451436 containerd[1566]: 2025-05-27 03:24:10.351 [INFO][4408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.199/32] ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Namespace="calico-system" Pod="csi-node-driver-r5t8w" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" May 27 03:24:10.451436 containerd[1566]: 2025-05-27 03:24:10.351 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to calibf21ca10672 ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Namespace="calico-system" Pod="csi-node-driver-r5t8w" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" May 27 03:24:10.451436 containerd[1566]: 2025-05-27 03:24:10.385 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Namespace="calico-system" Pod="csi-node-driver-r5t8w" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" May 27 03:24:10.451436 containerd[1566]: 2025-05-27 03:24:10.391 [INFO][4408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Namespace="calico-system" Pod="csi-node-driver-r5t8w" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dca63bae-90e6-4118-afb8-b162c7c3ea6d", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a", Pod:"csi-node-driver-r5t8w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf21ca10672", MAC:"3a:ee:ac:e0:39:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:10.451436 containerd[1566]: 2025-05-27 03:24:10.421 [INFO][4408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" Namespace="calico-system" Pod="csi-node-driver-r5t8w" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-csi--node--driver--r5t8w-eth0" May 27 03:24:10.510728 containerd[1566]: time="2025-05-27T03:24:10.510622435Z" level=info msg="connecting to shim 9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a" address="unix:///run/containerd/s/4fadd8e81198fd8b4a5e5498a164a8f98f20332f251d71021dabac24d6283cb7" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:10.516650 containerd[1566]: time="2025-05-27T03:24:10.516596500Z" level=info msg="connecting to shim 88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea" address="unix:///run/containerd/s/529c8f7d7f31c57023109f81d2c802dd019d570f99eb0b9467ae098579d6de19" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:10.549704 containerd[1566]: time="2025-05-27T03:24:10.549536017Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-c5nrz,Uid:597bcafa-bfd9-4d2c-864e-2f15d6a1d21c,Namespace:kube-system,Attempt:0,} returns sandbox id \"79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b\"" May 27 03:24:10.561573 containerd[1566]: time="2025-05-27T03:24:10.559464196Z" level=info msg="CreateContainer within sandbox \"79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:24:10.584076 containerd[1566]: time="2025-05-27T03:24:10.584023769Z" level=info msg="Container c10b8b64682d742a9911e05bbba35585417c4ebbda365ae9dc54fd4aa0a896ed: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:10.599012 containerd[1566]: time="2025-05-27T03:24:10.598965025Z" level=info msg="CreateContainer within sandbox \"79395b62d959c6f708d33b80ecd9b6aef6053fe4f6ff5677093eaa0afbe0ac5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c10b8b64682d742a9911e05bbba35585417c4ebbda365ae9dc54fd4aa0a896ed\"" May 27 03:24:10.600410 containerd[1566]: time="2025-05-27T03:24:10.600377950Z" level=info msg="StartContainer for \"c10b8b64682d742a9911e05bbba35585417c4ebbda365ae9dc54fd4aa0a896ed\"" May 27 03:24:10.602634 containerd[1566]: time="2025-05-27T03:24:10.602571155Z" level=info msg="connecting to shim c10b8b64682d742a9911e05bbba35585417c4ebbda365ae9dc54fd4aa0a896ed" address="unix:///run/containerd/s/8bfb5a26600cb517c7c7729d8b4ea9be1fa1a16b93421f12ffb39bf0f8759bb5" protocol=ttrpc version=3 May 27 03:24:10.607649 systemd[1]: Started cri-containerd-9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a.scope - libcontainer container 9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a. May 27 03:24:10.623841 systemd[1]: Started cri-containerd-88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea.scope - libcontainer container 88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea. 
May 27 03:24:10.676576 systemd[1]: Started cri-containerd-c10b8b64682d742a9911e05bbba35585417c4ebbda365ae9dc54fd4aa0a896ed.scope - libcontainer container c10b8b64682d742a9911e05bbba35585417c4ebbda365ae9dc54fd4aa0a896ed. May 27 03:24:10.721237 containerd[1566]: time="2025-05-27T03:24:10.721164248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6j5hq,Uid:e0ea8984-88dc-48df-a02b-bebd3dda937c,Namespace:kube-system,Attempt:0,}" May 27 03:24:10.797655 systemd-networkd[1450]: calia70e5048509: Gained IPv6LL May 27 03:24:10.826175 containerd[1566]: time="2025-05-27T03:24:10.825962804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r5t8w,Uid:dca63bae-90e6-4118-afb8-b162c7c3ea6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a\"" May 27 03:24:10.863940 containerd[1566]: time="2025-05-27T03:24:10.863876576Z" level=info msg="StartContainer for \"c10b8b64682d742a9911e05bbba35585417c4ebbda365ae9dc54fd4aa0a896ed\" returns successfully" May 27 03:24:10.870466 containerd[1566]: time="2025-05-27T03:24:10.870258793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-865759b8fd-5m84j,Uid:662bc28a-aa93-4b26-9e56-f8dfc5b781cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea\"" May 27 03:24:11.059901 kubelet[2783]: I0527 03:24:11.059813 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c5nrz" podStartSLOduration=41.059786902 podStartE2EDuration="41.059786902s" podCreationTimestamp="2025-05-27 03:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:24:11.024580645 +0000 UTC m=+47.488613283" watchObservedRunningTime="2025-05-27 03:24:11.059786902 +0000 UTC m=+47.523819512" May 27 03:24:11.232144 
systemd-networkd[1450]: cali98bce75e47b: Link UP May 27 03:24:11.234847 systemd-networkd[1450]: cali98bce75e47b: Gained carrier May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:10.935 [INFO][4648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0 coredns-668d6bf9bc- kube-system e0ea8984-88dc-48df-a02b-bebd3dda937c 809 0 2025-05-27 03:23:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal coredns-668d6bf9bc-6j5hq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali98bce75e47b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j5hq" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:10.937 [INFO][4648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j5hq" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.099 [INFO][4686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" HandleID="k8s-pod-network.fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" 
May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.100 [INFO][4686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" HandleID="k8s-pod-network.fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003557f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-6j5hq", "timestamp":"2025-05-27 03:24:11.099440987 +0000 UTC"}, Hostname:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.101 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.101 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.101 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal' May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.124 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.146 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.162 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.164 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.173 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.173 [INFO][4686] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.177 [INFO][4686] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7 May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.192 [INFO][4686] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.71.192/26 handle="k8s-pod-network.fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.207 [INFO][4686] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.200/26] block=192.168.71.192/26 handle="k8s-pod-network.fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.208 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.200/26] handle="k8s-pod-network.fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" host="ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal" May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.208 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:11.264674 containerd[1566]: 2025-05-27 03:24:11.208 [INFO][4686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.200/26] IPv6=[] ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" HandleID="k8s-pod-network.fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Workload="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" May 27 03:24:11.267043 containerd[1566]: 2025-05-27 03:24:11.216 [INFO][4648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j5hq" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e0ea8984-88dc-48df-a02b-bebd3dda937c", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-6j5hq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98bce75e47b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:11.267043 containerd[1566]: 2025-05-27 03:24:11.218 [INFO][4648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.200/32] ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-6j5hq" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" May 27 03:24:11.267043 containerd[1566]: 2025-05-27 03:24:11.218 [INFO][4648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98bce75e47b ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j5hq" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" May 27 03:24:11.267043 containerd[1566]: 2025-05-27 03:24:11.234 [INFO][4648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j5hq" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" May 27 03:24:11.267043 containerd[1566]: 2025-05-27 03:24:11.237 [INFO][4648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j5hq" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e0ea8984-88dc-48df-a02b-bebd3dda937c", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-0-0-ae23e6d554a512f383fd.c.flatcar-212911.internal", ContainerID:"fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7", Pod:"coredns-668d6bf9bc-6j5hq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98bce75e47b", MAC:"46:f9:d3:5c:85:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:11.267043 containerd[1566]: 2025-05-27 03:24:11.259 [INFO][4648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" Namespace="kube-system" Pod="coredns-668d6bf9bc-6j5hq" WorkloadEndpoint="ci--4344--0--0--ae23e6d554a512f383fd.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--6j5hq-eth0" May 27 03:24:11.329073 containerd[1566]: time="2025-05-27T03:24:11.328691074Z" level=info msg="connecting to shim fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7" address="unix:///run/containerd/s/c94a584aff3861ffe74baf27a7000baf9e1ab0fe4582c721779519a7d83a8e02" 
namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:11.387019 systemd[1]: Started cri-containerd-fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7.scope - libcontainer container fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7. May 27 03:24:11.519975 containerd[1566]: time="2025-05-27T03:24:11.518799607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6j5hq,Uid:e0ea8984-88dc-48df-a02b-bebd3dda937c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7\"" May 27 03:24:11.529431 containerd[1566]: time="2025-05-27T03:24:11.528330658Z" level=info msg="CreateContainer within sandbox \"fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:24:11.553390 containerd[1566]: time="2025-05-27T03:24:11.551965992Z" level=info msg="Container fe420a7f72e5992f987d6ac4a88f50c7b68b8703d15a80ac44d3023426c350ad: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:11.564109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005365040.mount: Deactivated successfully. 
May 27 03:24:11.576040 containerd[1566]: time="2025-05-27T03:24:11.575992761Z" level=info msg="CreateContainer within sandbox \"fec2dc9d65bb74b24c7c1697521686e72aa7dfa2543d0ed0cb6ed9dd0587dba7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe420a7f72e5992f987d6ac4a88f50c7b68b8703d15a80ac44d3023426c350ad\"" May 27 03:24:11.578298 containerd[1566]: time="2025-05-27T03:24:11.578262383Z" level=info msg="StartContainer for \"fe420a7f72e5992f987d6ac4a88f50c7b68b8703d15a80ac44d3023426c350ad\"" May 27 03:24:11.583671 containerd[1566]: time="2025-05-27T03:24:11.583634905Z" level=info msg="connecting to shim fe420a7f72e5992f987d6ac4a88f50c7b68b8703d15a80ac44d3023426c350ad" address="unix:///run/containerd/s/c94a584aff3861ffe74baf27a7000baf9e1ab0fe4582c721779519a7d83a8e02" protocol=ttrpc version=3 May 27 03:24:11.625732 systemd[1]: Started cri-containerd-fe420a7f72e5992f987d6ac4a88f50c7b68b8703d15a80ac44d3023426c350ad.scope - libcontainer container fe420a7f72e5992f987d6ac4a88f50c7b68b8703d15a80ac44d3023426c350ad. 
May 27 03:24:11.629724 systemd-networkd[1450]: cali294a337e5b4: Gained IPv6LL May 27 03:24:11.707036 containerd[1566]: time="2025-05-27T03:24:11.706980745Z" level=info msg="StartContainer for \"fe420a7f72e5992f987d6ac4a88f50c7b68b8703d15a80ac44d3023426c350ad\" returns successfully" May 27 03:24:11.821685 systemd-networkd[1450]: cali9ef92e2b045: Gained IPv6LL May 27 03:24:11.885904 systemd-networkd[1450]: calibf21ca10672: Gained IPv6LL May 27 03:24:12.026472 kubelet[2783]: I0527 03:24:12.026174 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6j5hq" podStartSLOduration=42.026128802 podStartE2EDuration="42.026128802s" podCreationTimestamp="2025-05-27 03:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:24:12.023519214 +0000 UTC m=+48.487551823" watchObservedRunningTime="2025-05-27 03:24:12.026128802 +0000 UTC m=+48.490161411" May 27 03:24:12.349952 containerd[1566]: time="2025-05-27T03:24:12.349744831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:12.351580 containerd[1566]: time="2025-05-27T03:24:12.351450430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:24:12.352589 containerd[1566]: time="2025-05-27T03:24:12.352501918Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:12.354971 containerd[1566]: time="2025-05-27T03:24:12.354891447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:12.356199 
containerd[1566]: time="2025-05-27T03:24:12.355976668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 4.003088649s" May 27 03:24:12.356199 containerd[1566]: time="2025-05-27T03:24:12.356023694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:24:12.358049 containerd[1566]: time="2025-05-27T03:24:12.357993366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:24:12.360392 containerd[1566]: time="2025-05-27T03:24:12.360167237Z" level=info msg="CreateContainer within sandbox \"6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:24:12.369658 containerd[1566]: time="2025-05-27T03:24:12.369605981Z" level=info msg="Container 60a82e1adee8d02c6babad6c764008c5ca7a6e4a1e1f90fbc9990eb9cf88083c: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:12.387508 containerd[1566]: time="2025-05-27T03:24:12.387460194Z" level=info msg="CreateContainer within sandbox \"6f58495f2fbf610db3e4016b6161379ccdee4d51f119862440aabfa395a3dd02\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"60a82e1adee8d02c6babad6c764008c5ca7a6e4a1e1f90fbc9990eb9cf88083c\"" May 27 03:24:12.388300 containerd[1566]: time="2025-05-27T03:24:12.388198091Z" level=info msg="StartContainer for \"60a82e1adee8d02c6babad6c764008c5ca7a6e4a1e1f90fbc9990eb9cf88083c\"" May 27 03:24:12.390977 containerd[1566]: time="2025-05-27T03:24:12.390884634Z" level=info msg="connecting to shim 
60a82e1adee8d02c6babad6c764008c5ca7a6e4a1e1f90fbc9990eb9cf88083c" address="unix:///run/containerd/s/312291263347be8e4aebeeae2b0561e7fb9c44c83cb79f662ae3f85ac48b5334" protocol=ttrpc version=3 May 27 03:24:12.429836 systemd[1]: Started cri-containerd-60a82e1adee8d02c6babad6c764008c5ca7a6e4a1e1f90fbc9990eb9cf88083c.scope - libcontainer container 60a82e1adee8d02c6babad6c764008c5ca7a6e4a1e1f90fbc9990eb9cf88083c. May 27 03:24:12.525702 systemd-networkd[1450]: cali98bce75e47b: Gained IPv6LL May 27 03:24:12.538945 containerd[1566]: time="2025-05-27T03:24:12.538896843Z" level=info msg="StartContainer for \"60a82e1adee8d02c6babad6c764008c5ca7a6e4a1e1f90fbc9990eb9cf88083c\" returns successfully" May 27 03:24:12.601461 containerd[1566]: time="2025-05-27T03:24:12.599743831Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:12.601461 containerd[1566]: time="2025-05-27T03:24:12.600836130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 03:24:12.605740 containerd[1566]: time="2025-05-27T03:24:12.605641530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 247.579728ms" May 27 03:24:12.605877 containerd[1566]: time="2025-05-27T03:24:12.605742058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:24:12.609830 containerd[1566]: time="2025-05-27T03:24:12.609405494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:24:12.612294 
containerd[1566]: time="2025-05-27T03:24:12.612126737Z" level=info msg="CreateContainer within sandbox \"6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:24:12.627648 containerd[1566]: time="2025-05-27T03:24:12.626566862Z" level=info msg="Container 250154f119347d1da8e530b8c4028ec7382758d2c65348de287ff6d093350087: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:12.642140 containerd[1566]: time="2025-05-27T03:24:12.642095073Z" level=info msg="CreateContainer within sandbox \"6fcffa961a2d09165830da684d0a6a3d1298bc8c9e643eede6823c8c5366020e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"250154f119347d1da8e530b8c4028ec7382758d2c65348de287ff6d093350087\"" May 27 03:24:12.643562 containerd[1566]: time="2025-05-27T03:24:12.643520832Z" level=info msg="StartContainer for \"250154f119347d1da8e530b8c4028ec7382758d2c65348de287ff6d093350087\"" May 27 03:24:12.646178 containerd[1566]: time="2025-05-27T03:24:12.646071018Z" level=info msg="connecting to shim 250154f119347d1da8e530b8c4028ec7382758d2c65348de287ff6d093350087" address="unix:///run/containerd/s/8bcfca582bba83b47337717e45dd027bce72f9feeaf5b6a69d17fc0618286efb" protocol=ttrpc version=3 May 27 03:24:12.689714 systemd[1]: Started cri-containerd-250154f119347d1da8e530b8c4028ec7382758d2c65348de287ff6d093350087.scope - libcontainer container 250154f119347d1da8e530b8c4028ec7382758d2c65348de287ff6d093350087. 
May 27 03:24:12.753790 containerd[1566]: time="2025-05-27T03:24:12.753741450Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:12.757834 containerd[1566]: time="2025-05-27T03:24:12.757678076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:12.757834 containerd[1566]: time="2025-05-27T03:24:12.757718273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:24:12.758090 kubelet[2783]: E0527 03:24:12.757992 2783 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:12.758090 kubelet[2783]: E0527 03:24:12.758062 2783 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:12.760749 kubelet[2783]: E0527 03:24:12.759915 2783 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nfw7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-526wj_calico-system(fece5a2d-649d-4220-838d-a0db5be7df49): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:12.760911 containerd[1566]: time="2025-05-27T03:24:12.759659060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:24:12.762845 kubelet[2783]: E0527 03:24:12.761148 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-526wj" podUID="fece5a2d-649d-4220-838d-a0db5be7df49" May 27 03:24:12.827956 containerd[1566]: time="2025-05-27T03:24:12.827907874Z" level=info msg="StartContainer for \"250154f119347d1da8e530b8c4028ec7382758d2c65348de287ff6d093350087\" returns successfully" May 27 03:24:13.025494 kubelet[2783]: E0527 03:24:13.025442 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-526wj" podUID="fece5a2d-649d-4220-838d-a0db5be7df49" May 27 03:24:13.061793 kubelet[2783]: I0527 03:24:13.061665 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5854b5b658-n8nn4" podStartSLOduration=28.055196545 podStartE2EDuration="32.061641842s" podCreationTimestamp="2025-05-27 03:23:41 +0000 UTC" firstStartedPulling="2025-05-27 03:24:08.350905693 +0000 UTC m=+44.814938302" lastFinishedPulling="2025-05-27 03:24:12.357350992 +0000 UTC m=+48.821383599" observedRunningTime="2025-05-27 03:24:13.060231377 +0000 UTC m=+49.524263986" watchObservedRunningTime="2025-05-27 03:24:13.061641842 +0000 UTC m=+49.525674442" May 27 03:24:13.062791 kubelet[2783]: I0527 03:24:13.062725 2783 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5854b5b658-fprlv" podStartSLOduration=27.850295156 podStartE2EDuration="32.062688168s" podCreationTimestamp="2025-05-27 03:23:41 +0000 UTC" firstStartedPulling="2025-05-27 03:24:08.394977292 +0000 UTC m=+44.859009898" lastFinishedPulling="2025-05-27 03:24:12.607370301 +0000 UTC m=+49.071402910" observedRunningTime="2025-05-27 03:24:13.041088845 +0000 UTC m=+49.505121454" watchObservedRunningTime="2025-05-27 03:24:13.062688168 +0000 UTC m=+49.526720782" May 27 03:24:13.986368 containerd[1566]: time="2025-05-27T03:24:13.986064505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:13.988451 containerd[1566]: time="2025-05-27T03:24:13.988408504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:24:13.990515 containerd[1566]: time="2025-05-27T03:24:13.990451425Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:13.995011 containerd[1566]: time="2025-05-27T03:24:13.994102176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:13.995258 containerd[1566]: time="2025-05-27T03:24:13.994991847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.235296907s" May 27 03:24:13.995487 containerd[1566]: 
time="2025-05-27T03:24:13.995464911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:24:14.005004 containerd[1566]: time="2025-05-27T03:24:14.004964852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:24:14.011781 containerd[1566]: time="2025-05-27T03:24:14.011675567Z" level=info msg="CreateContainer within sandbox \"9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:24:14.028198 kubelet[2783]: I0527 03:24:14.028166 2783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:14.044497 containerd[1566]: time="2025-05-27T03:24:14.042454968Z" level=info msg="Container cd7088e5ec1d350da8faf75f2da0577c6d97499b77652ba744e3e8778e5d3f6d: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:14.060240 containerd[1566]: time="2025-05-27T03:24:14.060201293Z" level=info msg="CreateContainer within sandbox \"9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cd7088e5ec1d350da8faf75f2da0577c6d97499b77652ba744e3e8778e5d3f6d\"" May 27 03:24:14.064717 containerd[1566]: time="2025-05-27T03:24:14.064685754Z" level=info msg="StartContainer for \"cd7088e5ec1d350da8faf75f2da0577c6d97499b77652ba744e3e8778e5d3f6d\"" May 27 03:24:14.068481 containerd[1566]: time="2025-05-27T03:24:14.067930906Z" level=info msg="connecting to shim cd7088e5ec1d350da8faf75f2da0577c6d97499b77652ba744e3e8778e5d3f6d" address="unix:///run/containerd/s/4fadd8e81198fd8b4a5e5498a164a8f98f20332f251d71021dabac24d6283cb7" protocol=ttrpc version=3 May 27 03:24:14.136567 systemd[1]: Started cri-containerd-cd7088e5ec1d350da8faf75f2da0577c6d97499b77652ba744e3e8778e5d3f6d.scope - libcontainer container 
cd7088e5ec1d350da8faf75f2da0577c6d97499b77652ba744e3e8778e5d3f6d. May 27 03:24:14.328476 containerd[1566]: time="2025-05-27T03:24:14.328189314Z" level=info msg="StartContainer for \"cd7088e5ec1d350da8faf75f2da0577c6d97499b77652ba744e3e8778e5d3f6d\" returns successfully" May 27 03:24:15.192110 ntpd[1545]: Listen normally on 7 vxlan.calico 192.168.71.192:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 7 vxlan.calico 192.168.71.192:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 8 calif44937ac65e [fe80::ecee:eeff:feee:eeee%4]:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 9 vxlan.calico [fe80::6437:77ff:fe45:fa5d%5]:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 10 cali5fba3258fde [fe80::ecee:eeff:feee:eeee%8]:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 11 cali997e5acccca [fe80::ecee:eeff:feee:eeee%9]:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 12 calia70e5048509 [fe80::ecee:eeff:feee:eeee%10]:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 13 cali9ef92e2b045 [fe80::ecee:eeff:feee:eeee%11]:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 14 cali294a337e5b4 [fe80::ecee:eeff:feee:eeee%12]:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 15 calibf21ca10672 [fe80::ecee:eeff:feee:eeee%13]:123 May 27 03:24:15.193585 ntpd[1545]: 27 May 03:24:15 ntpd[1545]: Listen normally on 16 cali98bce75e47b [fe80::ecee:eeff:feee:eeee%14]:123 May 27 03:24:15.192235 ntpd[1545]: Listen normally on 8 calif44937ac65e [fe80::ecee:eeff:feee:eeee%4]:123 May 27 03:24:15.192837 ntpd[1545]: Listen normally on 9 vxlan.calico [fe80::6437:77ff:fe45:fa5d%5]:123 May 27 03:24:15.192910 ntpd[1545]: Listen normally on 10 cali5fba3258fde 
[fe80::ecee:eeff:feee:eeee%8]:123 May 27 03:24:15.193212 ntpd[1545]: Listen normally on 11 cali997e5acccca [fe80::ecee:eeff:feee:eeee%9]:123 May 27 03:24:15.193276 ntpd[1545]: Listen normally on 12 calia70e5048509 [fe80::ecee:eeff:feee:eeee%10]:123 May 27 03:24:15.193327 ntpd[1545]: Listen normally on 13 cali9ef92e2b045 [fe80::ecee:eeff:feee:eeee%11]:123 May 27 03:24:15.193398 ntpd[1545]: Listen normally on 14 cali294a337e5b4 [fe80::ecee:eeff:feee:eeee%12]:123 May 27 03:24:15.193454 ntpd[1545]: Listen normally on 15 calibf21ca10672 [fe80::ecee:eeff:feee:eeee%13]:123 May 27 03:24:15.193513 ntpd[1545]: Listen normally on 16 cali98bce75e47b [fe80::ecee:eeff:feee:eeee%14]:123 May 27 03:24:17.177392 containerd[1566]: time="2025-05-27T03:24:17.176894079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:17.182414 containerd[1566]: time="2025-05-27T03:24:17.182363768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:24:17.184055 containerd[1566]: time="2025-05-27T03:24:17.183989176Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:17.194775 containerd[1566]: time="2025-05-27T03:24:17.194734108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:17.197024 containerd[1566]: time="2025-05-27T03:24:17.196859090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.188142108s" May 27 03:24:17.197438 containerd[1566]: time="2025-05-27T03:24:17.197403217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:24:17.199611 containerd[1566]: time="2025-05-27T03:24:17.199566717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 03:24:17.218314 containerd[1566]: time="2025-05-27T03:24:17.217583645Z" level=info msg="CreateContainer within sandbox \"88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 03:24:17.242299 containerd[1566]: time="2025-05-27T03:24:17.241144197Z" level=info msg="Container f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:17.261939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2097764775.mount: Deactivated successfully. 
May 27 03:24:17.266690 containerd[1566]: time="2025-05-27T03:24:17.266287661Z" level=info msg="CreateContainer within sandbox \"88f28ee626f921e01ec6c7415127dd11b2bddacb26c57f5c9142f530c0ad6aea\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e\""
May 27 03:24:17.269170 containerd[1566]: time="2025-05-27T03:24:17.269115344Z" level=info msg="StartContainer for \"f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e\""
May 27 03:24:17.271499 containerd[1566]: time="2025-05-27T03:24:17.271232311Z" level=info msg="connecting to shim f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e" address="unix:///run/containerd/s/529c8f7d7f31c57023109f81d2c802dd019d570f99eb0b9467ae098579d6de19" protocol=ttrpc version=3
May 27 03:24:17.312722 systemd[1]: Started cri-containerd-f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e.scope - libcontainer container f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e.
May 27 03:24:17.414374 containerd[1566]: time="2025-05-27T03:24:17.414295833Z" level=info msg="StartContainer for \"f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e\" returns successfully"
May 27 03:24:18.141289 containerd[1566]: time="2025-05-27T03:24:18.141217989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e\" id:\"0e68ffff1286c834dcb78375226ecccd749dbe56cbfd3a25f46f571304723927\" pid:4974 exited_at:{seconds:1748316258 nanos:140417070}"
May 27 03:24:18.159559 kubelet[2783]: I0527 03:24:18.159483 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-865759b8fd-5m84j" podStartSLOduration=25.838161315 podStartE2EDuration="32.159457946s" podCreationTimestamp="2025-05-27 03:23:46 +0000 UTC" firstStartedPulling="2025-05-27 03:24:10.877454171 +0000 UTC m=+47.341486778" lastFinishedPulling="2025-05-27 03:24:17.198750811 +0000 UTC m=+53.662783409" observedRunningTime="2025-05-27 03:24:18.091123711 +0000 UTC m=+54.555156320" watchObservedRunningTime="2025-05-27 03:24:18.159457946 +0000 UTC m=+54.623490558"
May 27 03:24:19.049616 containerd[1566]: time="2025-05-27T03:24:19.049539503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:19.050963 containerd[1566]: time="2025-05-27T03:24:19.050915580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639"
May 27 03:24:19.052239 containerd[1566]: time="2025-05-27T03:24:19.052174951Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:19.055511 containerd[1566]: time="2025-05-27T03:24:19.055177581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:19.056081 containerd[1566]: time="2025-05-27T03:24:19.056021165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.856408446s"
May 27 03:24:19.056081 containerd[1566]: time="2025-05-27T03:24:19.056071362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 27 03:24:19.062469 containerd[1566]: time="2025-05-27T03:24:19.061891810Z" level=info msg="CreateContainer within sandbox \"9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 27 03:24:19.076578 containerd[1566]: time="2025-05-27T03:24:19.076535820Z" level=info msg="Container dca0237d0ac7cc2fadbce77a847328e0a0692b9aa75917b32a81f670074fa8ba: CDI devices from CRI Config.CDIDevices: []"
May 27 03:24:19.089661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205869725.mount: Deactivated successfully.
May 27 03:24:19.097317 containerd[1566]: time="2025-05-27T03:24:19.097258568Z" level=info msg="CreateContainer within sandbox \"9b082d143b96a82326b472b58f706882ca8b3ebde85953791d4030e5bea00c5a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dca0237d0ac7cc2fadbce77a847328e0a0692b9aa75917b32a81f670074fa8ba\""
May 27 03:24:19.098632 containerd[1566]: time="2025-05-27T03:24:19.098239045Z" level=info msg="StartContainer for \"dca0237d0ac7cc2fadbce77a847328e0a0692b9aa75917b32a81f670074fa8ba\""
May 27 03:24:19.102624 containerd[1566]: time="2025-05-27T03:24:19.102585288Z" level=info msg="connecting to shim dca0237d0ac7cc2fadbce77a847328e0a0692b9aa75917b32a81f670074fa8ba" address="unix:///run/containerd/s/4fadd8e81198fd8b4a5e5498a164a8f98f20332f251d71021dabac24d6283cb7" protocol=ttrpc version=3
May 27 03:24:19.139663 systemd[1]: Started cri-containerd-dca0237d0ac7cc2fadbce77a847328e0a0692b9aa75917b32a81f670074fa8ba.scope - libcontainer container dca0237d0ac7cc2fadbce77a847328e0a0692b9aa75917b32a81f670074fa8ba.
May 27 03:24:19.203546 containerd[1566]: time="2025-05-27T03:24:19.203464776Z" level=info msg="StartContainer for \"dca0237d0ac7cc2fadbce77a847328e0a0692b9aa75917b32a81f670074fa8ba\" returns successfully"
May 27 03:24:19.721724 containerd[1566]: time="2025-05-27T03:24:19.721593589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:24:19.839434 kubelet[2783]: I0527 03:24:19.839390 2783 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 27 03:24:19.839434 kubelet[2783]: I0527 03:24:19.839440 2783 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 27 03:24:19.847181 containerd[1566]: time="2025-05-27T03:24:19.847110798Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:24:19.849170 containerd[1566]: time="2025-05-27T03:24:19.849029084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:24:19.849313 containerd[1566]: time="2025-05-27T03:24:19.849116298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:24:19.850533 kubelet[2783]: E0527 03:24:19.850478 2783 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:24:19.850680 kubelet[2783]: E0527 03:24:19.850537 2783 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:24:19.851035 kubelet[2783]: E0527 03:24:19.850682 2783 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ce6e86134e5b48b79055a94e0b2c2b01,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44mtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5885f54b6c-n2m2n_calico-system(58f6d86c-3e78-41cf-91c0-b624abdbabc6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:24:19.853575 containerd[1566]: time="2025-05-27T03:24:19.853540036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:24:19.977430 containerd[1566]: time="2025-05-27T03:24:19.977240888Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:24:19.979121 containerd[1566]: time="2025-05-27T03:24:19.979043203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:24:19.979414 containerd[1566]: time="2025-05-27T03:24:19.979089397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:24:19.979480 kubelet[2783]: E0527 03:24:19.979436 2783 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:24:19.979593 kubelet[2783]: E0527 03:24:19.979502 2783 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:24:19.979731 kubelet[2783]: E0527 03:24:19.979661 2783 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44mtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5885f54b6c-n2m2n_calico-system(58f6d86c-3e78-41cf-91c0-b624abdbabc6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:24:19.981363 kubelet[2783]: E0527 03:24:19.981194 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5885f54b6c-n2m2n" podUID="58f6d86c-3e78-41cf-91c0-b624abdbabc6"
May 27 03:24:20.085173 kubelet[2783]: I0527 03:24:20.083976 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r5t8w" podStartSLOduration=25.853373963 podStartE2EDuration="34.08395425s" podCreationTimestamp="2025-05-27 03:23:46 +0000 UTC" firstStartedPulling="2025-05-27 03:24:10.827651781 +0000 UTC m=+47.291684386" lastFinishedPulling="2025-05-27 03:24:19.058232072 +0000 UTC m=+55.522264673" observedRunningTime="2025-05-27 03:24:20.083427226 +0000 UTC m=+56.547459837" watchObservedRunningTime="2025-05-27 03:24:20.08395425 +0000 UTC m=+56.547986864"
May 27 03:24:27.722467 containerd[1566]: time="2025-05-27T03:24:27.721463560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:24:27.851913 containerd[1566]: time="2025-05-27T03:24:27.851836881Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:24:27.853673 containerd[1566]: time="2025-05-27T03:24:27.853607740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:24:27.853910 containerd[1566]: time="2025-05-27T03:24:27.853749142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:24:27.854111 kubelet[2783]: E0527 03:24:27.854010 2783 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:24:27.854111 kubelet[2783]: E0527 03:24:27.854082 2783 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:24:27.855176 kubelet[2783]: E0527 03:24:27.854269 2783 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nfw7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-526wj_calico-system(fece5a2d-649d-4220-838d-a0db5be7df49): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:24:27.855662 kubelet[2783]: E0527 03:24:27.855602 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-526wj" podUID="fece5a2d-649d-4220-838d-a0db5be7df49"
May 27 03:24:28.650437 systemd[1]: Started sshd@9-10.128.0.39:22-139.178.68.195:56882.service - OpenSSH per-connection server daemon (139.178.68.195:56882).
May 27 03:24:28.961888 sshd[5036]: Accepted publickey for core from 139.178.68.195 port 56882 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:24:28.963602 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:28.971501 systemd-logind[1551]: New session 10 of user core.
May 27 03:24:28.977617 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 03:24:29.299517 sshd[5038]: Connection closed by 139.178.68.195 port 56882
May 27 03:24:29.301165 sshd-session[5036]: pam_unix(sshd:session): session closed for user core
May 27 03:24:29.309256 systemd[1]: sshd@9-10.128.0.39:22-139.178.68.195:56882.service: Deactivated successfully.
May 27 03:24:29.317759 systemd[1]: session-10.scope: Deactivated successfully.
May 27 03:24:29.320410 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit.
May 27 03:24:29.323329 systemd-logind[1551]: Removed session 10.
May 27 03:24:31.721961 kubelet[2783]: E0527 03:24:31.721792 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5885f54b6c-n2m2n" podUID="58f6d86c-3e78-41cf-91c0-b624abdbabc6"
May 27 03:24:34.354749 systemd[1]: Started sshd@10-10.128.0.39:22-139.178.68.195:49154.service - OpenSSH per-connection server daemon (139.178.68.195:49154).
May 27 03:24:34.658317 sshd[5055]: Accepted publickey for core from 139.178.68.195 port 49154 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:24:34.660436 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:34.668327 systemd-logind[1551]: New session 11 of user core.
May 27 03:24:34.673564 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 03:24:34.950744 sshd[5059]: Connection closed by 139.178.68.195 port 49154
May 27 03:24:34.952444 sshd-session[5055]: pam_unix(sshd:session): session closed for user core
May 27 03:24:34.961426 systemd[1]: sshd@10-10.128.0.39:22-139.178.68.195:49154.service: Deactivated successfully.
May 27 03:24:34.966692 systemd[1]: session-11.scope: Deactivated successfully.
May 27 03:24:34.970577 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit.
May 27 03:24:34.973944 systemd-logind[1551]: Removed session 11.
May 27 03:24:35.044660 containerd[1566]: time="2025-05-27T03:24:35.044562477Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62\" id:\"156017080b616ae7d7f0a86bcf22ed4ae75b58cf3ac0ceee25d51a9cc19313d4\" pid:5081 exited_at:{seconds:1748316275 nanos:44020388}"
May 27 03:24:39.895517 kubelet[2783]: I0527 03:24:39.895243 2783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 03:24:40.012386 systemd[1]: Started sshd@11-10.128.0.39:22-139.178.68.195:49160.service - OpenSSH per-connection server daemon (139.178.68.195:49160).
May 27 03:24:40.311598 sshd[5098]: Accepted publickey for core from 139.178.68.195 port 49160 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:24:40.313387 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:40.320781 systemd-logind[1551]: New session 12 of user core.
May 27 03:24:40.331545 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 03:24:40.607331 sshd[5100]: Connection closed by 139.178.68.195 port 49160
May 27 03:24:40.608553 sshd-session[5098]: pam_unix(sshd:session): session closed for user core
May 27 03:24:40.614659 systemd[1]: sshd@11-10.128.0.39:22-139.178.68.195:49160.service: Deactivated successfully.
May 27 03:24:40.618175 systemd[1]: session-12.scope: Deactivated successfully.
May 27 03:24:40.619890 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit.
May 27 03:24:40.622575 systemd-logind[1551]: Removed session 12.
May 27 03:24:40.669968 systemd[1]: Started sshd@12-10.128.0.39:22-139.178.68.195:49170.service - OpenSSH per-connection server daemon (139.178.68.195:49170).
May 27 03:24:40.720369 kubelet[2783]: E0527 03:24:40.720142 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-526wj" podUID="fece5a2d-649d-4220-838d-a0db5be7df49"
May 27 03:24:40.975228 sshd[5113]: Accepted publickey for core from 139.178.68.195 port 49170 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:24:40.976980 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:40.985063 systemd-logind[1551]: New session 13 of user core.
May 27 03:24:40.992575 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 03:24:41.317072 sshd[5115]: Connection closed by 139.178.68.195 port 49170
May 27 03:24:41.317990 sshd-session[5113]: pam_unix(sshd:session): session closed for user core
May 27 03:24:41.324210 systemd[1]: sshd@12-10.128.0.39:22-139.178.68.195:49170.service: Deactivated successfully.
May 27 03:24:41.327368 systemd[1]: session-13.scope: Deactivated successfully.
May 27 03:24:41.329090 systemd-logind[1551]: Session 13 logged out. Waiting for processes to exit.
May 27 03:24:41.331762 systemd-logind[1551]: Removed session 13.
May 27 03:24:41.372004 systemd[1]: Started sshd@13-10.128.0.39:22-139.178.68.195:49172.service - OpenSSH per-connection server daemon (139.178.68.195:49172).
May 27 03:24:41.677282 sshd[5125]: Accepted publickey for core from 139.178.68.195 port 49172 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk
May 27 03:24:41.679116 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:41.686640 systemd-logind[1551]: New session 14 of user core.
May 27 03:24:41.692609 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 03:24:41.974485 sshd[5127]: Connection closed by 139.178.68.195 port 49172
May 27 03:24:41.974687 sshd-session[5125]: pam_unix(sshd:session): session closed for user core
May 27 03:24:41.982949 systemd[1]: sshd@13-10.128.0.39:22-139.178.68.195:49172.service: Deactivated successfully.
May 27 03:24:41.988990 systemd[1]: session-14.scope: Deactivated successfully.
May 27 03:24:41.994227 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit.
May 27 03:24:41.997201 systemd-logind[1551]: Removed session 14.
May 27 03:24:44.721820 containerd[1566]: time="2025-05-27T03:24:44.721680487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:24:44.844648 containerd[1566]: time="2025-05-27T03:24:44.844584468Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:24:44.846234 containerd[1566]: time="2025-05-27T03:24:44.846175550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:24:44.846537 containerd[1566]: time="2025-05-27T03:24:44.846306681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:24:44.846653 kubelet[2783]: E0527 03:24:44.846577 2783 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:24:44.847767 kubelet[2783]: E0527 03:24:44.846669 2783 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:24:44.847767 kubelet[2783]: E0527 03:24:44.846944 2783 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ce6e86134e5b48b79055a94e0b2c2b01,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44mtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5885f54b6c-n2m2n_calico-system(58f6d86c-3e78-41cf-91c0-b624abdbabc6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:24:44.849901 containerd[1566]: time="2025-05-27T03:24:44.849845103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:24:44.970724 containerd[1566]: time="2025-05-27T03:24:44.970663199Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:24:44.972800 containerd[1566]: time="2025-05-27T03:24:44.972575800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:24:44.972800 containerd[1566]: time="2025-05-27T03:24:44.972578104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:24:44.973613 kubelet[2783]: E0527 03:24:44.972935 2783 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token:
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:44.973613 kubelet[2783]: E0527 03:24:44.973019 2783 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:44.973613 kubelet[2783]: E0527 03:24:44.973208 2783 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44mtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Liv
enessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5885f54b6c-n2m2n_calico-system(58f6d86c-3e78-41cf-91c0-b624abdbabc6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:44.974630 kubelet[2783]: E0527 03:24:44.974552 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5885f54b6c-n2m2n" podUID="58f6d86c-3e78-41cf-91c0-b624abdbabc6" May 27 03:24:47.034061 systemd[1]: Started sshd@14-10.128.0.39:22-139.178.68.195:41770.service - OpenSSH per-connection server daemon (139.178.68.195:41770). May 27 03:24:47.359915 sshd[5149]: Accepted publickey for core from 139.178.68.195 port 41770 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:24:47.362901 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:47.371615 systemd-logind[1551]: New session 15 of user core. May 27 03:24:47.378678 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:24:47.715182 sshd[5151]: Connection closed by 139.178.68.195 port 41770 May 27 03:24:47.714099 sshd-session[5149]: pam_unix(sshd:session): session closed for user core May 27 03:24:47.725953 systemd[1]: sshd@14-10.128.0.39:22-139.178.68.195:41770.service: Deactivated successfully. May 27 03:24:47.726667 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit. May 27 03:24:47.733601 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:24:47.738328 systemd-logind[1551]: Removed session 15. May 27 03:24:48.145191 containerd[1566]: time="2025-05-27T03:24:48.145133011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e\" id:\"9b7a3d69d373d788237927abce8dec9adeec1a616b847b6197b05be8e6060026\" pid:5174 exited_at:{seconds:1748316288 nanos:144434228}" May 27 03:24:52.771065 systemd[1]: Started sshd@15-10.128.0.39:22-139.178.68.195:41776.service - OpenSSH per-connection server daemon (139.178.68.195:41776). 
May 27 03:24:53.073440 sshd[5187]: Accepted publickey for core from 139.178.68.195 port 41776 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:24:53.075163 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:53.083007 systemd-logind[1551]: New session 16 of user core. May 27 03:24:53.088606 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:24:53.376981 sshd[5189]: Connection closed by 139.178.68.195 port 41776 May 27 03:24:53.378317 sshd-session[5187]: pam_unix(sshd:session): session closed for user core May 27 03:24:53.384298 systemd[1]: sshd@15-10.128.0.39:22-139.178.68.195:41776.service: Deactivated successfully. May 27 03:24:53.387003 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:24:53.388636 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit. May 27 03:24:53.391058 systemd-logind[1551]: Removed session 16. May 27 03:24:54.720757 containerd[1566]: time="2025-05-27T03:24:54.720605620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:24:54.852741 containerd[1566]: time="2025-05-27T03:24:54.852664267Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:54.854428 containerd[1566]: time="2025-05-27T03:24:54.854356755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
May 27 03:24:54.854686 containerd[1566]: time="2025-05-27T03:24:54.854377504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:24:54.854835 kubelet[2783]: E0527 03:24:54.854741 2783 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:54.854835 kubelet[2783]: E0527 03:24:54.854807 2783 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:54.855769 kubelet[2783]: E0527 03:24:54.854989 2783 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nfw7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-526wj_calico-system(fece5a2d-649d-4220-838d-a0db5be7df49): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:54.856766 kubelet[2783]: E0527 03:24:54.856702 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-526wj" podUID="fece5a2d-649d-4220-838d-a0db5be7df49" May 27 03:24:57.724523 kubelet[2783]: E0527 
03:24:57.724442 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5885f54b6c-n2m2n" podUID="58f6d86c-3e78-41cf-91c0-b624abdbabc6" May 27 03:24:58.430970 systemd[1]: Started sshd@16-10.128.0.39:22-139.178.68.195:58856.service - OpenSSH per-connection server daemon (139.178.68.195:58856). May 27 03:24:58.745179 sshd[5200]: Accepted publickey for core from 139.178.68.195 port 58856 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:24:58.747136 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:58.754364 systemd-logind[1551]: New session 17 of user core. May 27 03:24:58.760563 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 27 03:24:59.103532 sshd[5202]: Connection closed by 139.178.68.195 port 58856 May 27 03:24:59.104303 sshd-session[5200]: pam_unix(sshd:session): session closed for user core May 27 03:24:59.115510 systemd[1]: sshd@16-10.128.0.39:22-139.178.68.195:58856.service: Deactivated successfully. May 27 03:24:59.119066 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:24:59.121498 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit. May 27 03:24:59.125297 systemd-logind[1551]: Removed session 17. May 27 03:24:59.164721 systemd[1]: Started sshd@17-10.128.0.39:22-139.178.68.195:58868.service - OpenSSH per-connection server daemon (139.178.68.195:58868). May 27 03:24:59.497773 sshd[5214]: Accepted publickey for core from 139.178.68.195 port 58868 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:24:59.501314 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:59.514829 systemd-logind[1551]: New session 18 of user core. May 27 03:24:59.520285 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:24:59.895212 sshd[5216]: Connection closed by 139.178.68.195 port 58868 May 27 03:24:59.896967 sshd-session[5214]: pam_unix(sshd:session): session closed for user core May 27 03:24:59.906411 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit. May 27 03:24:59.908645 systemd[1]: sshd@17-10.128.0.39:22-139.178.68.195:58868.service: Deactivated successfully. May 27 03:24:59.916111 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:24:59.920097 systemd-logind[1551]: Removed session 18. May 27 03:24:59.954827 systemd[1]: Started sshd@18-10.128.0.39:22-139.178.68.195:58874.service - OpenSSH per-connection server daemon (139.178.68.195:58874). 
May 27 03:25:00.283803 sshd[5226]: Accepted publickey for core from 139.178.68.195 port 58874 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:25:00.286919 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:00.296077 systemd-logind[1551]: New session 19 of user core. May 27 03:25:00.307563 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:25:01.534997 sshd[5228]: Connection closed by 139.178.68.195 port 58874 May 27 03:25:01.535858 sshd-session[5226]: pam_unix(sshd:session): session closed for user core May 27 03:25:01.545067 systemd[1]: sshd@18-10.128.0.39:22-139.178.68.195:58874.service: Deactivated successfully. May 27 03:25:01.546124 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit. May 27 03:25:01.551211 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:25:01.560812 systemd-logind[1551]: Removed session 19. May 27 03:25:01.595947 systemd[1]: Started sshd@19-10.128.0.39:22-139.178.68.195:58886.service - OpenSSH per-connection server daemon (139.178.68.195:58886). May 27 03:25:01.936378 sshd[5245]: Accepted publickey for core from 139.178.68.195 port 58886 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:25:01.939379 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:01.949283 systemd-logind[1551]: New session 20 of user core. May 27 03:25:01.959246 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 03:25:02.587210 sshd[5250]: Connection closed by 139.178.68.195 port 58886 May 27 03:25:02.589353 sshd-session[5245]: pam_unix(sshd:session): session closed for user core May 27 03:25:02.598991 systemd[1]: sshd@19-10.128.0.39:22-139.178.68.195:58886.service: Deactivated successfully. May 27 03:25:02.605494 systemd[1]: session-20.scope: Deactivated successfully. 
May 27 03:25:02.610225 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit. May 27 03:25:02.613492 systemd-logind[1551]: Removed session 20. May 27 03:25:02.645901 systemd[1]: Started sshd@20-10.128.0.39:22-139.178.68.195:58894.service - OpenSSH per-connection server daemon (139.178.68.195:58894). May 27 03:25:02.977062 sshd[5260]: Accepted publickey for core from 139.178.68.195 port 58894 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:25:02.979754 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:02.991667 systemd-logind[1551]: New session 21 of user core. May 27 03:25:02.999634 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 03:25:03.307323 sshd[5262]: Connection closed by 139.178.68.195 port 58894 May 27 03:25:03.305590 sshd-session[5260]: pam_unix(sshd:session): session closed for user core May 27 03:25:03.315416 systemd-logind[1551]: Session 21 logged out. Waiting for processes to exit. May 27 03:25:03.316969 systemd[1]: sshd@20-10.128.0.39:22-139.178.68.195:58894.service: Deactivated successfully. May 27 03:25:03.324783 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:25:03.330697 systemd-logind[1551]: Removed session 21. 
May 27 03:25:05.095672 containerd[1566]: time="2025-05-27T03:25:05.095613885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb37a655541746f7b2c7510859e79d4be1c216d2f94df455f8a21d100153fc62\" id:\"213a761167551c0b6330bc68245abd2f2fbeb821ccbe7a75d52bd99d98a442af\" pid:5288 exited_at:{seconds:1748316305 nanos:94969106}" May 27 03:25:05.724243 kubelet[2783]: E0527 03:25:05.724156 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-526wj" podUID="fece5a2d-649d-4220-838d-a0db5be7df49" May 27 03:25:08.367732 systemd[1]: Started sshd@21-10.128.0.39:22-139.178.68.195:45832.service - OpenSSH per-connection server daemon (139.178.68.195:45832). May 27 03:25:08.707950 sshd[5300]: Accepted publickey for core from 139.178.68.195 port 45832 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:25:08.710632 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:08.721681 systemd-logind[1551]: New session 22 of user core. May 27 03:25:08.731647 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 03:25:09.038617 sshd[5304]: Connection closed by 139.178.68.195 port 45832 May 27 03:25:09.040123 sshd-session[5300]: pam_unix(sshd:session): session closed for user core May 27 03:25:09.048431 systemd[1]: sshd@21-10.128.0.39:22-139.178.68.195:45832.service: Deactivated successfully. 
May 27 03:25:09.053561 systemd[1]: session-22.scope: Deactivated successfully. May 27 03:25:09.058454 systemd-logind[1551]: Session 22 logged out. Waiting for processes to exit. May 27 03:25:09.061943 systemd-logind[1551]: Removed session 22. May 27 03:25:10.722890 kubelet[2783]: E0527 03:25:10.722320 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5885f54b6c-n2m2n" podUID="58f6d86c-3e78-41cf-91c0-b624abdbabc6" May 27 03:25:14.103480 systemd[1]: Started sshd@22-10.128.0.39:22-139.178.68.195:45058.service - OpenSSH per-connection server daemon (139.178.68.195:45058). 
May 27 03:25:14.441511 sshd[5317]: Accepted publickey for core from 139.178.68.195 port 45058 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:25:14.444988 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:14.454563 systemd-logind[1551]: New session 23 of user core. May 27 03:25:14.460781 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 03:25:14.806127 sshd[5319]: Connection closed by 139.178.68.195 port 45058 May 27 03:25:14.808775 sshd-session[5317]: pam_unix(sshd:session): session closed for user core May 27 03:25:14.822105 systemd[1]: sshd@22-10.128.0.39:22-139.178.68.195:45058.service: Deactivated successfully. May 27 03:25:14.826301 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:25:14.829592 systemd-logind[1551]: Session 23 logged out. Waiting for processes to exit. May 27 03:25:14.833534 systemd-logind[1551]: Removed session 23. May 27 03:25:17.724378 kubelet[2783]: E0527 03:25:17.723935 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-526wj" podUID="fece5a2d-649d-4220-838d-a0db5be7df49" May 27 03:25:18.146604 containerd[1566]: time="2025-05-27T03:25:18.146548025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e\" id:\"24deb66e6ef2e4f0012f5a53d040185699d9f34eedb2f98cd3504f6b965a10b1\" pid:5342 
exited_at:{seconds:1748316318 nanos:145983225}" May 27 03:25:18.412714 containerd[1566]: time="2025-05-27T03:25:18.412076401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f37fce5d20f9079db20a7c33cb48b289e4d535267579b0198840506e816c6f1e\" id:\"4e5014cbab1647a577b089132b2f2914371d05c2b30e61a9defd0841cab43fe3\" pid:5365 exited_at:{seconds:1748316318 nanos:411414113}" May 27 03:25:19.867166 systemd[1]: Started sshd@23-10.128.0.39:22-139.178.68.195:45066.service - OpenSSH per-connection server daemon (139.178.68.195:45066). May 27 03:25:20.186614 sshd[5375]: Accepted publickey for core from 139.178.68.195 port 45066 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:25:20.189021 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:20.196536 systemd-logind[1551]: New session 24 of user core. May 27 03:25:20.204539 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 03:25:20.538490 sshd[5377]: Connection closed by 139.178.68.195 port 45066 May 27 03:25:20.539600 sshd-session[5375]: pam_unix(sshd:session): session closed for user core May 27 03:25:20.549849 systemd[1]: sshd@23-10.128.0.39:22-139.178.68.195:45066.service: Deactivated successfully. May 27 03:25:20.554974 systemd[1]: session-24.scope: Deactivated successfully. May 27 03:25:20.560735 systemd-logind[1551]: Session 24 logged out. Waiting for processes to exit. May 27 03:25:20.564836 systemd-logind[1551]: Removed session 24. 
May 27 03:25:23.729731 kubelet[2783]: E0527 03:25:23.729609 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5885f54b6c-n2m2n" podUID="58f6d86c-3e78-41cf-91c0-b624abdbabc6" May 27 03:25:25.598706 systemd[1]: Started sshd@24-10.128.0.39:22-139.178.68.195:45524.service - OpenSSH per-connection server daemon (139.178.68.195:45524). May 27 03:25:25.930019 sshd[5391]: Accepted publickey for core from 139.178.68.195 port 45524 ssh2: RSA SHA256:mnqIxIZFyVqRuNCAlkbJ6p3n3UNDE1dveAMtAGCHZnk May 27 03:25:25.932401 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:25.944402 systemd-logind[1551]: New session 25 of user core. May 27 03:25:25.953042 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 27 03:25:26.272399 sshd[5393]: Connection closed by 139.178.68.195 port 45524 May 27 03:25:26.274612 sshd-session[5391]: pam_unix(sshd:session): session closed for user core May 27 03:25:26.287641 systemd[1]: sshd@24-10.128.0.39:22-139.178.68.195:45524.service: Deactivated successfully. May 27 03:25:26.294706 systemd[1]: session-25.scope: Deactivated successfully. May 27 03:25:26.296635 systemd-logind[1551]: Session 25 logged out. Waiting for processes to exit. May 27 03:25:26.300813 systemd-logind[1551]: Removed session 25.