May 27 17:36:48.860299 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 17:36:48.860329 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:36:48.860340 kernel: BIOS-provided physical RAM map:
May 27 17:36:48.860347 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
May 27 17:36:48.860353 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 27 17:36:48.860360 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
May 27 17:36:48.860368 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 27 17:36:48.860375 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
May 27 17:36:48.860381 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 27 17:36:48.860388 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 27 17:36:48.860395 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 27 17:36:48.860404 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 27 17:36:48.860410 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 27 17:36:48.860417 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 27 17:36:48.860425 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 27 17:36:48.860432 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 27 17:36:48.860444 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 17:36:48.860452 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 17:36:48.860459 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 17:36:48.860466 kernel: NX (Execute Disable) protection: active
May 27 17:36:48.860473 kernel: APIC: Static calls initialized
May 27 17:36:48.860480 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
May 27 17:36:48.860488 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
May 27 17:36:48.860495 kernel: extended physical RAM map:
May 27 17:36:48.860502 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
May 27 17:36:48.860525 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 27 17:36:48.860533 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
May 27 17:36:48.860542 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 27 17:36:48.860549 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
May 27 17:36:48.860556 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
May 27 17:36:48.860563 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
May 27 17:36:48.860570 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
May 27 17:36:48.860577 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
May 27 17:36:48.860584 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 27 17:36:48.860591 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 27 17:36:48.860599 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 27 17:36:48.860606 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 27 17:36:48.860613 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 27 17:36:48.860622 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 27 17:36:48.860629 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 27 17:36:48.860640 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 27 17:36:48.860647 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 17:36:48.860655 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 17:36:48.860662 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 17:36:48.860671 kernel: efi: EFI v2.7 by EDK II
May 27 17:36:48.860679 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
May 27 17:36:48.860686 kernel: random: crng init done
May 27 17:36:48.860694 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 27 17:36:48.860701 kernel: secureboot: Secure boot enabled
May 27 17:36:48.860708 kernel: SMBIOS 2.8 present.
May 27 17:36:48.860716 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 27 17:36:48.860723 kernel: DMI: Memory slots populated: 1/1
May 27 17:36:48.860730 kernel: Hypervisor detected: KVM
May 27 17:36:48.860738 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 17:36:48.860745 kernel: kvm-clock: using sched offset of 5777626607 cycles
May 27 17:36:48.860764 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 17:36:48.860772 kernel: tsc: Detected 2794.748 MHz processor
May 27 17:36:48.860780 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 17:36:48.860787 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 17:36:48.860795 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
May 27 17:36:48.860802 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 17:36:48.860813 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 17:36:48.860820 kernel: Using GB pages for direct mapping
May 27 17:36:48.860829 kernel: ACPI: Early table checksum verification disabled
May 27 17:36:48.860839 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
May 27 17:36:48.860847 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 27 17:36:48.860855 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:36:48.860862 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:36:48.860870 kernel: ACPI: FACS 0x000000009BBDD000 000040
May 27 17:36:48.860877 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:36:48.860885 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:36:48.860892 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:36:48.860903 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:36:48.860910 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 27 17:36:48.860918 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
May 27 17:36:48.860925 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
May 27 17:36:48.860933 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
May 27 17:36:48.860940 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
May 27 17:36:48.860948 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
May 27 17:36:48.860955 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
May 27 17:36:48.860963 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
May 27 17:36:48.860970 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
May 27 17:36:48.860980 kernel: No NUMA configuration found
May 27 17:36:48.860987 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
May 27 17:36:48.860995 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
May 27 17:36:48.861003 kernel: Zone ranges:
May 27 17:36:48.861010 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 17:36:48.861018 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
May 27 17:36:48.861025 kernel: Normal empty
May 27 17:36:48.861033 kernel: Device empty
May 27 17:36:48.861040 kernel: Movable zone start for each node
May 27 17:36:48.861050 kernel: Early memory node ranges
May 27 17:36:48.861057 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
May 27 17:36:48.861064 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
May 27 17:36:48.861072 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
May 27 17:36:48.861079 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
May 27 17:36:48.861087 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
May 27 17:36:48.861094 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
May 27 17:36:48.861102 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 17:36:48.861109 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
May 27 17:36:48.861117 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 17:36:48.861127 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 27 17:36:48.861134 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 27 17:36:48.861142 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
May 27 17:36:48.861149 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 17:36:48.861157 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 17:36:48.861164 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 17:36:48.861172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 17:36:48.861179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 17:36:48.861189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 17:36:48.861199 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 17:36:48.861207 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 17:36:48.861214 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 17:36:48.861222 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 17:36:48.861229 kernel: TSC deadline timer available
May 27 17:36:48.861237 kernel: CPU topo: Max. logical packages: 1
May 27 17:36:48.861244 kernel: CPU topo: Max. logical dies: 1
May 27 17:36:48.861252 kernel: CPU topo: Max. dies per package: 1
May 27 17:36:48.861269 kernel: CPU topo: Max. threads per core: 1
May 27 17:36:48.861276 kernel: CPU topo: Num. cores per package: 4
May 27 17:36:48.861284 kernel: CPU topo: Num. threads per package: 4
May 27 17:36:48.861292 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 17:36:48.861302 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 17:36:48.861310 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 17:36:48.861317 kernel: kvm-guest: setup PV sched yield
May 27 17:36:48.861325 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 27 17:36:48.861333 kernel: Booting paravirtualized kernel on KVM
May 27 17:36:48.861344 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 17:36:48.861352 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 17:36:48.861360 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 17:36:48.861368 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 17:36:48.861375 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 17:36:48.861383 kernel: kvm-guest: PV spinlocks enabled
May 27 17:36:48.861391 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 17:36:48.861400 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:36:48.861410 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:36:48.861418 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:36:48.861426 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:36:48.861434 kernel: Fallback order for Node 0: 0
May 27 17:36:48.861442 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
May 27 17:36:48.861449 kernel: Policy zone: DMA32
May 27 17:36:48.861457 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:36:48.861465 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 17:36:48.861473 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 17:36:48.861483 kernel: ftrace: allocated 157 pages with 5 groups
May 27 17:36:48.861490 kernel: Dynamic Preempt: voluntary
May 27 17:36:48.861498 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:36:48.861506 kernel: rcu: RCU event tracing is enabled.
May 27 17:36:48.861528 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 17:36:48.861536 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:36:48.861544 kernel: Rude variant of Tasks RCU enabled.
May 27 17:36:48.861551 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:36:48.861559 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:36:48.861570 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 17:36:48.861578 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:36:48.861585 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:36:48.861596 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:36:48.861604 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 17:36:48.861611 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:36:48.861619 kernel: Console: colour dummy device 80x25
May 27 17:36:48.861627 kernel: printk: legacy console [ttyS0] enabled
May 27 17:36:48.861635 kernel: ACPI: Core revision 20240827
May 27 17:36:48.861645 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 17:36:48.861653 kernel: APIC: Switch to symmetric I/O mode setup
May 27 17:36:48.861661 kernel: x2apic enabled
May 27 17:36:48.861669 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 17:36:48.861677 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 17:36:48.861685 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 17:36:48.861692 kernel: kvm-guest: setup PV IPIs
May 27 17:36:48.861700 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 17:36:48.861708 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 17:36:48.861718 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 17:36:48.861726 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 17:36:48.861734 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 17:36:48.861742 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 17:36:48.861750 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 17:36:48.861764 kernel: Spectre V2 : Mitigation: Retpolines
May 27 17:36:48.861773 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 17:36:48.861780 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 17:36:48.861788 kernel: RETBleed: Mitigation: untrained return thunk
May 27 17:36:48.861798 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 17:36:48.861806 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 17:36:48.861814 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 17:36:48.861823 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 17:36:48.861831 kernel: x86/bugs: return thunk changed
May 27 17:36:48.861839 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 17:36:48.861846 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 17:36:48.861854 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 17:36:48.861864 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 17:36:48.861873 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 17:36:48.861881 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 17:36:48.861889 kernel: Freeing SMP alternatives memory: 32K
May 27 17:36:48.861896 kernel: pid_max: default: 32768 minimum: 301
May 27 17:36:48.861904 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:36:48.861912 kernel: landlock: Up and running.
May 27 17:36:48.861919 kernel: SELinux: Initializing.
May 27 17:36:48.861927 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:36:48.861937 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:36:48.861945 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 17:36:48.861953 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 17:36:48.861961 kernel: ... version: 0
May 27 17:36:48.861968 kernel: ... bit width: 48
May 27 17:36:48.861978 kernel: ... generic registers: 6
May 27 17:36:48.861986 kernel: ... value mask: 0000ffffffffffff
May 27 17:36:48.861994 kernel: ... max period: 00007fffffffffff
May 27 17:36:48.862002 kernel: ... fixed-purpose events: 0
May 27 17:36:48.862012 kernel: ... event mask: 000000000000003f
May 27 17:36:48.862020 kernel: signal: max sigframe size: 1776
May 27 17:36:48.862027 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:36:48.862035 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:36:48.862043 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:36:48.862051 kernel: smp: Bringing up secondary CPUs ...
May 27 17:36:48.862059 kernel: smpboot: x86: Booting SMP configuration:
May 27 17:36:48.862066 kernel: .... node #0, CPUs: #1 #2 #3
May 27 17:36:48.862074 kernel: smp: Brought up 1 node, 4 CPUs
May 27 17:36:48.862082 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 17:36:48.862093 kernel: Memory: 2409212K/2552216K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 137064K reserved, 0K cma-reserved)
May 27 17:36:48.862100 kernel: devtmpfs: initialized
May 27 17:36:48.862108 kernel: x86/mm: Memory block size: 128MB
May 27 17:36:48.862116 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
May 27 17:36:48.862124 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
May 27 17:36:48.862132 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:36:48.862140 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 17:36:48.862147 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:36:48.862157 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:36:48.862165 kernel: audit: initializing netlink subsys (disabled)
May 27 17:36:48.862173 kernel: audit: type=2000 audit(1748367406.325:1): state=initialized audit_enabled=0 res=1
May 27 17:36:48.862181 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:36:48.862189 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 17:36:48.862197 kernel: cpuidle: using governor menu
May 27 17:36:48.862205 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:36:48.862212 kernel: dca service started, version 1.12.1
May 27 17:36:48.862220 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 27 17:36:48.862230 kernel: PCI: Using configuration type 1 for base access
May 27 17:36:48.862238 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 17:36:48.862246 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:36:48.862254 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:36:48.862262 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:36:48.862270 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:36:48.862277 kernel: ACPI: Added _OSI(Module Device)
May 27 17:36:48.862285 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:36:48.862293 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:36:48.862303 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:36:48.862311 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:36:48.862318 kernel: ACPI: Interpreter enabled
May 27 17:36:48.862326 kernel: ACPI: PM: (supports S0 S5)
May 27 17:36:48.862334 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 17:36:48.862342 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 17:36:48.862349 kernel: PCI: Using E820 reservations for host bridge windows
May 27 17:36:48.862359 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 17:36:48.862367 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 17:36:48.862586 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 17:36:48.862715 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 17:36:48.862846 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 17:36:48.862857 kernel: PCI host bridge to bus 0000:00
May 27 17:36:48.862983 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 17:36:48.863094 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 17:36:48.863212 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 17:36:48.863322 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 27 17:36:48.863432 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 27 17:36:48.863558 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 27 17:36:48.863671 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 17:36:48.863822 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 17:36:48.863965 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 17:36:48.864114 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 27 17:36:48.864240 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 27 17:36:48.864361 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 27 17:36:48.864480 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 17:36:48.864638 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 17:36:48.864793 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 27 17:36:48.864917 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 27 17:36:48.865046 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 27 17:36:48.865176 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 17:36:48.865298 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 27 17:36:48.865419 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 27 17:36:48.865562 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 27 17:36:48.865694 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 17:36:48.865871 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 27 17:36:48.866038 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 27 17:36:48.866161 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 27 17:36:48.866281 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 27 17:36:48.866410 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 17:36:48.866557 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 17:36:48.866695 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 17:36:48.866831 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 27 17:36:48.866952 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 27 17:36:48.867115 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 17:36:48.867263 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 27 17:36:48.867275 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 17:36:48.867294 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 17:36:48.867311 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 17:36:48.867324 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 17:36:48.867332 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 17:36:48.867340 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 17:36:48.867348 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 17:36:48.867356 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 17:36:48.867367 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 17:36:48.867375 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 17:36:48.867383 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 17:36:48.867391 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 17:36:48.867401 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 17:36:48.867409 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 17:36:48.867417 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 17:36:48.867425 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 17:36:48.867432 kernel: iommu: Default domain type: Translated
May 27 17:36:48.867440 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 17:36:48.867448 kernel: efivars: Registered efivars operations
May 27 17:36:48.867456 kernel: PCI: Using ACPI for IRQ routing
May 27 17:36:48.867464 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 17:36:48.867474 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
May 27 17:36:48.867482 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
May 27 17:36:48.867489 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
May 27 17:36:48.867497 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
May 27 17:36:48.867505 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
May 27 17:36:48.867654 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 17:36:48.867783 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 17:36:48.867905 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 17:36:48.867916 kernel: vgaarb: loaded
May 27 17:36:48.867927 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 17:36:48.867935 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 17:36:48.867943 kernel: clocksource: Switched to clocksource kvm-clock
May 27 17:36:48.867951 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:36:48.867959 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:36:48.867967 kernel: pnp: PnP ACPI init
May 27 17:36:48.868099 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 27 17:36:48.868110 kernel: pnp: PnP ACPI: found 6 devices
May 27 17:36:48.868121 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 17:36:48.868129 kernel: NET: Registered PF_INET protocol family
May 27 17:36:48.868137 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:36:48.868145 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 17:36:48.868153 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:36:48.868161 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:36:48.868169 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 17:36:48.868178 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 17:36:48.868186 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:36:48.868196 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:36:48.868204 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:36:48.868212 kernel: NET: Registered PF_XDP protocol family
May 27 17:36:48.868341 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 27 17:36:48.868498 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 27 17:36:48.868658 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 17:36:48.868780 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 17:36:48.868890 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 17:36:48.869006 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 27 17:36:48.869114 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 27 17:36:48.869224 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 27 17:36:48.869235 kernel: PCI: CLS 0 bytes, default 64
May 27 17:36:48.869243 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 17:36:48.869251 kernel: Initialise system trusted keyrings
May 27 17:36:48.869259 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 17:36:48.869267 kernel: Key type asymmetric registered
May 27 17:36:48.869278 kernel: Asymmetric key parser 'x509' registered
May 27 17:36:48.869301 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 17:36:48.869311 kernel: io scheduler mq-deadline registered
May 27 17:36:48.869319 kernel: io scheduler kyber registered
May 27 17:36:48.869327 kernel: io scheduler bfq registered
May 27 17:36:48.869335 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 17:36:48.869344 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 17:36:48.869352 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 17:36:48.869361 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 17:36:48.869369 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:36:48.869379 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:36:48.869388 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 17:36:48.869396 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 17:36:48.869416 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 17:36:48.869600 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 17:36:48.869615 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 17:36:48.869734 kernel: rtc_cmos 00:04: registered as rtc0
May 27 17:36:48.869858 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T17:36:48 UTC (1748367408)
May 27 17:36:48.869977 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 27 17:36:48.869988 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 17:36:48.869996 kernel: efifb: probing for efifb
May 27 17:36:48.870004 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 27 17:36:48.870012 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 27 17:36:48.870020 kernel: efifb: scrolling: redraw
May 27 17:36:48.870028 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 17:36:48.870036 kernel: Console: switching to colour frame buffer device 160x50
May 27 17:36:48.870047 kernel: fb0: EFI VGA frame buffer device
May 27 17:36:48.870056 kernel: pstore: Using crash dump compression: deflate
May 27 17:36:48.870066 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 17:36:48.870074 kernel: NET: Registered PF_INET6 protocol family
May 27 17:36:48.870082 kernel: Segment Routing with IPv6
May 27 17:36:48.870090 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:36:48.870100 kernel: NET: Registered PF_PACKET protocol family
May 27 17:36:48.870109 kernel: Key type dns_resolver registered
May 27 17:36:48.870117 kernel: IPI shorthand broadcast: enabled
May 27 17:36:48.870125 kernel: sched_clock: Marking stable (3762002802, 186070611)->(3969981874, -21908461)
May 27 17:36:48.870133 kernel: registered taskstats version 1
May 27 17:36:48.870141 kernel: Loading compiled-in X.509 certificates
May 27 17:36:48.870150 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 17:36:48.870158 kernel: Demotion targets for Node 0: null
May 27 17:36:48.870166 kernel: Key type .fscrypt registered
May 27 17:36:48.870176 kernel: Key type fscrypt-provisioning registered
May 27 17:36:48.870184 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:36:48.870193 kernel: ima: Allocated hash algorithm: sha1
May 27 17:36:48.870201 kernel: ima: No architecture policies found
May 27 17:36:48.870209 kernel: clk: Disabling unused clocks
May 27 17:36:48.870217 kernel: Warning: unable to open an initial console.
May 27 17:36:48.870228 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 17:36:48.870237 kernel: Write protecting the kernel read-only data: 24576k
May 27 17:36:48.870247 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 17:36:48.870257 kernel: Run /init as init process
May 27 17:36:48.870265 kernel: with arguments:
May 27 17:36:48.870273 kernel: /init
May 27 17:36:48.870282 kernel: with environment:
May 27 17:36:48.870289 kernel: HOME=/
May 27 17:36:48.870297 kernel: TERM=linux
May 27 17:36:48.870306 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:36:48.870318 systemd[1]: Successfully made /usr/ read-only.
May 27 17:36:48.870332 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:36:48.870342 systemd[1]: Detected virtualization kvm.
May 27 17:36:48.870350 systemd[1]: Detected architecture x86-64.
May 27 17:36:48.870358 systemd[1]: Running in initrd.
May 27 17:36:48.870367 systemd[1]: No hostname configured, using default hostname.
May 27 17:36:48.870376 systemd[1]: Hostname set to .
May 27 17:36:48.870385 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:36:48.870395 systemd[1]: Queued start job for default target initrd.target.
May 27 17:36:48.870404 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:36:48.870413 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:36:48.870423 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:36:48.870431 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:36:48.870440 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:36:48.870450 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:36:48.870462 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:36:48.870471 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:36:48.870480 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:36:48.870488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:36:48.870497 systemd[1]: Reached target paths.target - Path Units.
May 27 17:36:48.870505 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:36:48.870531 systemd[1]: Reached target swap.target - Swaps.
May 27 17:36:48.870540 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:36:48.870548 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:36:48.870560 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:36:48.870568 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:36:48.870577 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:36:48.870586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:36:48.870594 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:36:48.870603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:36:48.870612 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:36:48.870621 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:36:48.870632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:36:48.870641 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:36:48.870650 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:36:48.870659 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:36:48.870667 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:36:48.870676 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:36:48.870685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:36:48.870693 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:36:48.870705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:36:48.870713 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:36:48.870743 systemd-journald[220]: Collecting audit messages is disabled.
May 27 17:36:48.870774 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:36:48.870783 systemd-journald[220]: Journal started
May 27 17:36:48.870802 systemd-journald[220]: Runtime Journal (/run/log/journal/09e9585122c8428eabb18bdb6de7cff8) is 6M, max 48.2M, 42.2M free.
May 27 17:36:48.867339 systemd-modules-load[221]: Inserted module 'overlay'
May 27 17:36:48.872698 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:36:48.874850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:36:48.878568 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 17:36:48.881863 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:36:48.889659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:36:48.891851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:36:48.897540 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:36:48.901629 kernel: Bridge firewalling registered
May 27 17:36:48.900603 systemd-modules-load[221]: Inserted module 'br_netfilter'
May 27 17:36:48.902151 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:36:48.902558 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:36:48.905131 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:36:48.908171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:36:48.909386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:36:48.918057 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:36:48.920773 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:36:48.922955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:36:48.936366 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:36:48.951316 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:36:48.976090 systemd-resolved[262]: Positive Trust Anchors:
May 27 17:36:48.976116 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:36:48.976148 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:36:48.978841 systemd-resolved[262]: Defaulting to hostname 'linux'.
May 27 17:36:48.980043 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:36:48.986540 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:36:49.078570 kernel: SCSI subsystem initialized
May 27 17:36:49.087549 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:36:49.098549 kernel: iscsi: registered transport (tcp)
May 27 17:36:49.120534 kernel: iscsi: registered transport (qla4xxx)
May 27 17:36:49.120562 kernel: QLogic iSCSI HBA Driver
May 27 17:36:49.141648 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:36:49.165067 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:36:49.165672 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:36:49.233680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:36:49.235622 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:36:49.309571 kernel: raid6: avx2x4 gen() 28369 MB/s
May 27 17:36:49.326558 kernel: raid6: avx2x2 gen() 26698 MB/s
May 27 17:36:49.343651 kernel: raid6: avx2x1 gen() 24648 MB/s
May 27 17:36:49.343681 kernel: raid6: using algorithm avx2x4 gen() 28369 MB/s
May 27 17:36:49.361624 kernel: raid6: .... xor() 7943 MB/s, rmw enabled
May 27 17:36:49.361650 kernel: raid6: using avx2x2 recovery algorithm
May 27 17:36:49.383547 kernel: xor: automatically using best checksumming function avx
May 27 17:36:49.551577 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:36:49.560989 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:36:49.563161 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:36:49.597699 systemd-udevd[471]: Using default interface naming scheme 'v255'.
May 27 17:36:49.603503 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:36:49.604546 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:36:49.631556 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
May 27 17:36:49.664023 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:36:49.667164 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:36:49.747442 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:36:49.751336 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:36:49.812555 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 17:36:49.813542 kernel: libata version 3.00 loaded.
May 27 17:36:49.819536 kernel: cryptd: max_cpu_qlen set to 1000
May 27 17:36:49.821551 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 27 17:36:49.825099 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 27 17:36:49.825283 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:36:49.826535 kernel: ahci 0000:00:1f.2: version 3.0
May 27 17:36:49.826127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:36:49.829450 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:36:49.834100 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 17:36:49.834116 kernel: GPT:9289727 != 19775487
May 27 17:36:49.834126 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 17:36:49.834141 kernel: GPT:9289727 != 19775487
May 27 17:36:49.834157 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 17:36:49.834167 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:36:49.835915 kernel: AES CTR mode by8 optimization enabled
May 27 17:36:49.836720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:36:49.841302 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 27 17:36:49.841325 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 27 17:36:49.841495 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 27 17:36:49.842499 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 27 17:36:49.848468 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:36:49.848945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:36:49.855012 kernel: scsi host0: ahci
May 27 17:36:49.849051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:36:49.860260 kernel: scsi host1: ahci
May 27 17:36:49.861142 kernel: scsi host2: ahci
May 27 17:36:49.861900 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:36:49.875546 kernel: scsi host3: ahci
May 27 17:36:49.878596 kernel: scsi host4: ahci
May 27 17:36:49.881806 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 17:36:49.888740 kernel: scsi host5: ahci
May 27 17:36:49.888928 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
May 27 17:36:49.888940 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
May 27 17:36:49.888950 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
May 27 17:36:49.888960 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
May 27 17:36:49.888970 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
May 27 17:36:49.888980 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
May 27 17:36:49.897791 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 17:36:49.900685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:36:49.927375 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 17:36:49.928482 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 17:36:49.939488 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:36:49.940607 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:36:49.970421 disk-uuid[635]: Primary Header is updated.
May 27 17:36:49.970421 disk-uuid[635]: Secondary Entries is updated.
May 27 17:36:49.970421 disk-uuid[635]: Secondary Header is updated.
May 27 17:36:49.973848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:36:49.979543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:36:50.191546 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 27 17:36:50.191620 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 27 17:36:50.192544 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 27 17:36:50.193608 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 27 17:36:50.193688 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 27 17:36:50.194895 kernel: ata3.00: applying bridge limits
May 27 17:36:50.195533 kernel: ata3.00: configured for UDMA/100
May 27 17:36:50.197547 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 27 17:36:50.199538 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 27 17:36:50.199557 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 27 17:36:50.258119 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 27 17:36:50.258366 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 17:36:50.270545 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 27 17:36:50.639803 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:36:50.641080 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:36:50.642427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:36:50.642932 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:36:50.644195 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:36:50.672926 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:36:50.982254 disk-uuid[636]: The operation has completed successfully.
May 27 17:36:50.983656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:36:51.015690 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:36:51.015823 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:36:51.047216 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:36:51.065195 sh[664]: Success
May 27 17:36:51.084589 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:36:51.084653 kernel: device-mapper: uevent: version 1.0.3
May 27 17:36:51.085778 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:36:51.095543 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 17:36:51.129959 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:36:51.133965 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:36:51.153665 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:36:51.159350 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:36:51.159379 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (676)
May 27 17:36:51.161626 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd
May 27 17:36:51.161652 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 17:36:51.161663 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:36:51.167306 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:36:51.169825 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:36:51.172260 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:36:51.175245 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:36:51.177406 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:36:51.208570 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (709)
May 27 17:36:51.210867 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:36:51.210895 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:36:51.210907 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:36:51.218533 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:36:51.219190 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:36:51.220498 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:36:51.420933 ignition[749]: Ignition 2.21.0
May 27 17:36:51.420950 ignition[749]: Stage: fetch-offline
May 27 17:36:51.420986 ignition[749]: no configs at "/usr/lib/ignition/base.d"
May 27 17:36:51.420996 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:36:51.421098 ignition[749]: parsed url from cmdline: ""
May 27 17:36:51.421101 ignition[749]: no config URL provided
May 27 17:36:51.421106 ignition[749]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:36:51.421116 ignition[749]: no config at "/usr/lib/ignition/user.ign"
May 27 17:36:51.421141 ignition[749]: op(1): [started] loading QEMU firmware config module
May 27 17:36:51.421147 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 27 17:36:51.435276 ignition[749]: op(1): [finished] loading QEMU firmware config module
May 27 17:36:51.435305 ignition[749]: QEMU firmware config was not found. Ignoring...
May 27 17:36:51.447067 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:36:51.450355 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:36:51.499622 ignition[749]: parsing config with SHA512: 023bce65ea9fdcf37d36657688b81fb0ff67b9fd12f6d93d3ecff1cf204ca73a008875b6f6fd2442b2f385f591dd9e713b2a6053ddcea20f69754912bf9aed15
May 27 17:36:51.508753 unknown[749]: fetched base config from "system"
May 27 17:36:51.508769 unknown[749]: fetched user config from "qemu"
May 27 17:36:51.509101 ignition[749]: fetch-offline: fetch-offline passed
May 27 17:36:51.509177 ignition[749]: Ignition finished successfully
May 27 17:36:51.511961 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:36:51.523749 systemd-networkd[853]: lo: Link UP
May 27 17:36:51.523759 systemd-networkd[853]: lo: Gained carrier
May 27 17:36:51.525640 systemd-networkd[853]: Enumeration completed
May 27 17:36:51.525806 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:36:51.526101 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:36:51.526107 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:36:51.526543 systemd[1]: Reached target network.target - Network.
May 27 17:36:51.527683 systemd-networkd[853]: eth0: Link UP
May 27 17:36:51.527688 systemd-networkd[853]: eth0: Gained carrier
May 27 17:36:51.527707 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:36:51.529636 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 27 17:36:51.530849 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:36:51.553618 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 17:36:51.645740 ignition[857]: Ignition 2.21.0
May 27 17:36:51.645754 ignition[857]: Stage: kargs
May 27 17:36:51.646225 ignition[857]: no configs at "/usr/lib/ignition/base.d"
May 27 17:36:51.646240 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:36:51.647367 ignition[857]: kargs: kargs passed
May 27 17:36:51.647430 ignition[857]: Ignition finished successfully
May 27 17:36:51.656420 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:36:51.658840 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:36:51.707601 ignition[866]: Ignition 2.21.0
May 27 17:36:51.707616 ignition[866]: Stage: disks
May 27 17:36:51.707817 ignition[866]: no configs at "/usr/lib/ignition/base.d"
May 27 17:36:51.707832 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:36:51.713115 ignition[866]: disks: disks passed
May 27 17:36:51.713916 ignition[866]: Ignition finished successfully
May 27 17:36:51.717829 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:36:51.718583 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:36:51.722812 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:36:51.723350 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:36:51.723841 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:36:51.724178 systemd[1]: Reached target basic.target - Basic System.
May 27 17:36:51.726251 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:36:51.758896 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 17:36:51.767033 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:36:51.769508 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:36:51.923545 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none.
May 27 17:36:51.924111 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:36:51.925712 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:36:51.928958 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:36:51.931102 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:36:51.932130 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 17:36:51.932221 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:36:51.932255 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:36:51.948409 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:36:51.950651 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:36:51.955542 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (885)
May 27 17:36:51.959246 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:36:51.959287 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:36:51.959303 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:36:51.965271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:36:52.001573 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:36:52.032018 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory
May 27 17:36:52.037103 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:36:52.041968 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:36:52.158745 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:36:52.161576 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:36:52.163331 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:36:52.196563 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:36:52.197968 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:36:52.210663 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:36:52.234133 ignition[999]: INFO : Ignition 2.21.0
May 27 17:36:52.236417 ignition[999]: INFO : Stage: mount
May 27 17:36:52.236417 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:36:52.236417 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:36:52.239528 ignition[999]: INFO : mount: mount passed
May 27 17:36:52.239528 ignition[999]: INFO : Ignition finished successfully
May 27 17:36:52.244193 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:36:52.246083 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:36:52.275406 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:36:52.302555 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1011)
May 27 17:36:52.302623 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:36:52.304933 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:36:52.304970 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:36:52.309641 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:36:52.354602 ignition[1028]: INFO : Ignition 2.21.0
May 27 17:36:52.354602 ignition[1028]: INFO : Stage: files
May 27 17:36:52.356965 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:36:52.356965 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:36:52.359816 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
May 27 17:36:52.359816 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 17:36:52.359816 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 17:36:52.364917 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 17:36:52.364917 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 17:36:52.364917 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 17:36:52.364917 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 27 17:36:52.364917 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 27 17:36:52.362138 unknown[1028]: wrote ssh authorized keys file for user: core
May 27 17:36:52.460708 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 17:36:52.638749 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 27 17:36:52.638749 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 17:36:52.643106 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 17:36:52.644914 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:36:52.646722 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:36:52.648479 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:36:52.650351 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:36:52.652062 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:36:52.653951 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:36:52.661347 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:36:52.663535 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:36:52.665463 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 17:36:52.668185 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 17:36:52.668185 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 17:36:52.672991 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 27 17:36:52.824754 systemd-networkd[853]: eth0: Gained IPv6LL
May 27 17:36:53.406463 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 17:36:53.679130 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 17:36:53.679130 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 17:36:53.683688 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:36:53.685988 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:36:53.685988 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 17:36:53.685988 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 27 17:36:53.685988 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 17:36:53.685988 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 17:36:53.685988 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 27 17:36:53.685988 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 27 17:36:53.706650 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 27 17:36:53.711268 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 27 17:36:53.713184 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 27 17:36:53.713184 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 27 17:36:53.713184 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 27 17:36:53.713184 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:36:53.713184 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:36:53.713184 ignition[1028]: INFO : files: files passed
May 27 17:36:53.713184 ignition[1028]: INFO : Ignition finished successfully
May 27 17:36:53.719287 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 17:36:53.722072 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 17:36:53.726215 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 17:36:53.735854 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 17:36:53.735972 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 17:36:53.742357 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
May 27 17:36:53.746437 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:36:53.746437 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:36:53.749586 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:36:53.753620 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:36:53.754272 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:36:53.757378 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:36:53.816449 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:36:53.816611 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:36:53.817304 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:36:53.820124 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:36:53.822080 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:36:53.824990 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:36:53.856432 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:36:53.858310 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:36:53.882064 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:36:53.882589 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:36:53.883125 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:36:53.883488 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:36:53.883648 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:36:53.884315 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:36:53.884820 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:36:53.885141 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:36:53.885475 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:36:53.885998 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:36:53.886324 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:36:53.886836 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:36:53.887160 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:36:53.887501 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:36:53.888003 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:36:53.888323 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:36:53.888806 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:36:53.888914 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:36:53.913305 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:36:53.913963 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:36:53.914245 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:36:53.917429 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:36:53.917901 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:36:53.918012 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:36:53.920470 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:36:53.920599 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:36:53.921058 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:36:53.921306 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:36:53.925568 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:36:53.927017 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:36:53.927345 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:36:53.927905 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:36:53.927990 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:36:53.932955 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:36:53.933066 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:36:53.935019 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:36:53.935166 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:36:53.937327 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:36:53.937463 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:36:53.941085 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:36:53.941824 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:36:53.941974 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:36:53.943161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:36:53.946140 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:36:53.946302 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:36:53.948173 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:36:53.948323 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:36:53.956763 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:36:53.956902 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:36:53.976621 ignition[1084]: INFO : Ignition 2.21.0
May 27 17:36:53.976621 ignition[1084]: INFO : Stage: umount
May 27 17:36:53.978585 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:36:53.978585 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:36:53.982231 ignition[1084]: INFO : umount: umount passed
May 27 17:36:53.983434 ignition[1084]: INFO : Ignition finished successfully
May 27 17:36:53.985655 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:36:53.985835 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:36:53.986646 systemd[1]: Stopped target network.target - Network.
May 27 17:36:53.989138 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:36:53.989216 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:36:53.989542 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:36:53.989595 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:36:53.990032 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:36:53.990087 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:36:53.990357 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:36:53.990399 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:36:53.990972 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:36:53.991301 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:36:53.992810 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:36:54.014052 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:36:54.014230 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:36:54.018894 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:36:54.019169 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:36:54.019304 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:36:54.023666 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:36:54.024476 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:36:54.026074 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:36:54.026133 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:36:54.027612 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:36:54.031113 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:36:54.031177 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:36:54.032883 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:36:54.032934 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:36:54.035240 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:36:54.035292 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:36:54.037578 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:36:54.037653 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:36:54.041673 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:36:54.043022 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:36:54.043090 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:36:54.067926 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:36:54.068203 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:36:54.069227 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:36:54.069305 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:36:54.072497 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:36:54.072562 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:36:54.073037 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:36:54.073112 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:36:54.073996 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:36:54.074062 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:36:54.074812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:36:54.074894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:36:54.076983 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:36:54.086264 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:36:54.086343 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:36:54.091817 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:36:54.091868 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:36:54.095781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:36:54.095863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:36:54.099562 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 17:36:54.099620 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 17:36:54.099700 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:36:54.100106 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:36:54.100228 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:36:54.102029 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:36:54.102137 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:36:54.162363 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:36:54.162535 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:36:54.163576 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:36:54.165562 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:36:54.165620 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:36:54.170760 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:36:54.205171 systemd[1]: Switching root.
May 27 17:36:54.240316 systemd-journald[220]: Journal stopped
May 27 17:36:55.689536 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 27 17:36:55.689627 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:36:55.689653 kernel: SELinux: policy capability open_perms=1
May 27 17:36:55.689669 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:36:55.689684 kernel: SELinux: policy capability always_check_network=0
May 27 17:36:55.689698 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:36:55.689713 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:36:55.689728 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:36:55.689745 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:36:55.689760 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:36:55.689779 kernel: audit: type=1403 audit(1748367414.862:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:36:55.689802 systemd[1]: Successfully loaded SELinux policy in 53.301ms.
May 27 17:36:55.689838 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.799ms.
May 27 17:36:55.689855 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:36:55.689872 systemd[1]: Detected virtualization kvm.
May 27 17:36:55.689888 systemd[1]: Detected architecture x86-64.
May 27 17:36:55.689903 systemd[1]: Detected first boot.
May 27 17:36:55.689919 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:36:55.689935 zram_generator::config[1130]: No configuration found.
May 27 17:36:55.689956 kernel: Guest personality initialized and is inactive
May 27 17:36:55.689970 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 17:36:55.689985 kernel: Initialized host personality
May 27 17:36:55.690000 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:36:55.690015 systemd[1]: Populated /etc with preset unit settings.
May 27 17:36:55.690032 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:36:55.690048 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:36:55.690064 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:36:55.690087 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:36:55.690106 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:36:55.690124 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:36:55.690140 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:36:55.690156 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:36:55.690173 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:36:55.690190 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:36:55.690206 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:36:55.690227 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:36:55.690243 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:36:55.690259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:36:55.690275 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:36:55.690291 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:36:55.690307 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:36:55.690323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:36:55.690339 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 17:36:55.690359 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:36:55.690375 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:36:55.690391 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:36:55.690407 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:36:55.690423 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:36:55.690439 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:36:55.690455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:36:55.690474 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:36:55.690489 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:36:55.690509 systemd[1]: Reached target swap.target - Swaps.
May 27 17:36:55.690555 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:36:55.690572 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:36:55.690588 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:36:55.690612 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:36:55.690629 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:36:55.690644 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:36:55.690660 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:36:55.690676 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:36:55.690697 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:36:55.690714 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:36:55.690730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:36:55.690746 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:36:55.690762 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:36:55.690778 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:36:55.690795 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:36:55.690812 systemd[1]: Reached target machines.target - Containers.
May 27 17:36:55.690828 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:36:55.690847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:36:55.690864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:36:55.690880 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:36:55.690896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:36:55.690912 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:36:55.690934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:36:55.690950 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:36:55.690966 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:36:55.690986 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:36:55.691002 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:36:55.691018 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:36:55.691034 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:36:55.691050 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:36:55.691068 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:36:55.691084 kernel: fuse: init (API version 7.41)
May 27 17:36:55.691101 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:36:55.691117 kernel: loop: module loaded
May 27 17:36:55.691136 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:36:55.691152 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:36:55.691169 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:36:55.691185 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:36:55.691200 kernel: ACPI: bus type drm_connector registered
May 27 17:36:55.691216 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:36:55.691236 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:36:55.691252 systemd[1]: Stopped verity-setup.service.
May 27 17:36:55.691269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:36:55.691288 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:36:55.691307 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:36:55.691324 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:36:55.691340 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:36:55.691386 systemd-journald[1208]: Collecting audit messages is disabled.
May 27 17:36:55.691410 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:36:55.691425 systemd-journald[1208]: Journal started
May 27 17:36:55.691447 systemd-journald[1208]: Runtime Journal (/run/log/journal/09e9585122c8428eabb18bdb6de7cff8) is 6M, max 48.2M, 42.2M free.
May 27 17:36:55.424955 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:36:55.446633 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 17:36:55.447123 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:36:55.693844 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:36:55.694715 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:36:55.696076 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:36:55.697729 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:36:55.699338 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:36:55.699736 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:36:55.701242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:36:55.701484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:36:55.703057 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:36:55.703278 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:36:55.704711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:36:55.704926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:36:55.706538 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:36:55.706787 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:36:55.708212 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:36:55.708428 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:36:55.709919 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:36:55.711393 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:36:55.713021 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:36:55.714668 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:36:55.731236 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:36:55.734714 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:36:55.737637 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:36:55.738892 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:36:55.738936 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:36:55.741038 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:36:55.755656 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:36:55.756848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:36:55.758315 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:36:55.761535 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:36:55.762849 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:36:55.764283 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:36:55.765480 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:36:55.777005 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:36:55.779671 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:36:55.782647 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:36:55.786035 systemd-journald[1208]: Time spent on flushing to /var/log/journal/09e9585122c8428eabb18bdb6de7cff8 is 19.109ms for 1039 entries.
May 27 17:36:55.786035 systemd-journald[1208]: System Journal (/var/log/journal/09e9585122c8428eabb18bdb6de7cff8) is 8M, max 195.6M, 187.6M free.
May 27 17:36:55.824716 systemd-journald[1208]: Received client request to flush runtime journal.
May 27 17:36:55.824783 kernel: loop0: detected capacity change from 0 to 113872
May 27 17:36:55.787129 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:36:55.789073 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:36:55.790630 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:36:55.795330 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:36:55.798496 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:36:55.803533 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:36:55.824793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:36:55.827183 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:36:55.831564 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:36:55.839892 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:36:55.849201 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:36:55.849545 kernel: loop1: detected capacity change from 0 to 146240
May 27 17:36:55.852454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:36:55.881573 kernel: loop2: detected capacity change from 0 to 229808
May 27 17:36:55.888146 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
May 27 17:36:55.888166 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
May 27 17:36:55.896155 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:36:55.909547 kernel: loop3: detected capacity change from 0 to 113872
May 27 17:36:55.918547 kernel: loop4: detected capacity change from 0 to 146240
May 27 17:36:55.934569 kernel: loop5: detected capacity change from 0 to 229808
May 27 17:36:55.944994 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 17:36:55.945582 (sd-merge)[1272]: Merged extensions into '/usr'.
May 27 17:36:55.951012 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:36:55.951032 systemd[1]: Reloading...
May 27 17:36:56.024545 zram_generator::config[1301]: No configuration found.
May 27 17:36:56.119599 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:36:56.126943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:36:56.209703 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:36:56.210274 systemd[1]: Reloading finished in 258 ms.
May 27 17:36:56.254194 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:36:56.255965 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:36:56.273434 systemd[1]: Starting ensure-sysext.service...
May 27 17:36:56.275579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:36:56.288595 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
May 27 17:36:56.288607 systemd[1]: Reloading...
May 27 17:36:56.299634 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:36:56.299677 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:36:56.299969 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:36:56.300227 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:36:56.301178 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:36:56.301448 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 27 17:36:56.301572 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 27 17:36:56.314644 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:36:56.314658 systemd-tmpfiles[1337]: Skipping /boot
May 27 17:36:56.329801 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:36:56.329895 systemd-tmpfiles[1337]: Skipping /boot
May 27 17:36:56.354593 zram_generator::config[1364]: No configuration found.
May 27 17:36:56.445222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:36:56.525353 systemd[1]: Reloading finished in 236 ms.
May 27 17:36:56.547024 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:36:56.562392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:36:56.571832 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:36:56.574350 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:36:56.577061 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:36:56.588280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:36:56.592820 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:36:56.596467 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:36:56.601038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:36:56.601283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:36:56.607557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:36:56.610816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:36:56.614492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:36:56.616006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:36:56.616292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:36:56.620628 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:36:56.621692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:36:56.624433 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:36:56.626725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:36:56.627071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:36:56.629178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:36:56.637888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:36:56.640645 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:36:56.642595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:36:56.647544 systemd-udevd[1408]: Using default interface naming scheme 'v255'.
May 27 17:36:56.656669 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:36:56.656976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:36:56.660116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:36:56.663989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:36:56.666112 augenrules[1438]: No rules
May 27 17:36:56.670782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:36:56.672089 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:36:56.672203 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:36:56.674606 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:36:56.675667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:36:56.677039 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:36:56.677290 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:36:56.678995 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:36:56.682306 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:36:56.684164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:36:56.684911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:36:56.686549 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:36:56.688130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:36:56.688335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:36:56.690944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:36:56.692747 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:36:56.696773 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:36:56.698598 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:36:56.725205 systemd[1]: Finished ensure-sysext.service.
May 27 17:36:56.732557 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:36:56.734953 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:36:56.736264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:36:56.738700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:36:56.740844 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:36:56.743686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:36:56.746087 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:36:56.747555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:36:56.747606 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:36:56.758476 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:36:56.762969 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 17:36:56.765001 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:36:56.765038 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:36:56.769683 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:36:56.769969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:36:56.774387 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:36:56.774741 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:36:56.780230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:36:56.782355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:36:56.785142 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:36:56.785389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:36:56.788156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:36:56.788672 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:36:56.795602 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 17:36:56.799197 augenrules[1485]: /sbin/augenrules: No change
May 27 17:36:56.814679 augenrules[1520]: No rules
May 27 17:36:56.816348 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:36:56.816714 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:36:56.849543 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:36:56.849913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:36:56.852779 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:36:56.862545 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 27 17:36:56.872245 kernel: ACPI: button: Power Button [PWRF]
May 27 17:36:56.884006 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:36:56.894327 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 27 17:36:56.894627 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 27 17:36:56.895013 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 17:36:56.917002 systemd-resolved[1406]: Positive Trust Anchors:
May 27 17:36:56.917025 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:36:56.917056 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:36:56.927462 systemd-resolved[1406]: Defaulting to hostname 'linux'.
May 27 17:36:56.930014 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:36:56.931359 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:36:56.958598 systemd-networkd[1491]: lo: Link UP
May 27 17:36:56.958900 systemd-networkd[1491]: lo: Gained carrier
May 27 17:36:56.961616 systemd-networkd[1491]: Enumeration completed
May 27 17:36:56.961983 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:36:56.961987 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:36:56.963114 systemd-networkd[1491]: eth0: Link UP
May 27 17:36:56.963268 systemd-networkd[1491]: eth0: Gained carrier
May 27 17:36:56.963282 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:36:56.977388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:36:56.978813 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:36:56.981704 systemd[1]: Reached target network.target - Network.
May 27 17:36:56.985810 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:36:56.988957 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:36:56.996299 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:36:56.997883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:36:57.006350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:36:57.008605 systemd-networkd[1491]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 17:36:57.012938 kernel: kvm_amd: TSC scaling supported
May 27 17:36:57.012976 kernel: kvm_amd: Nested Virtualization enabled
May 27 17:36:57.012990 kernel: kvm_amd: Nested Paging enabled
May 27 17:36:57.013002 kernel: kvm_amd: LBR virtualization supported
May 27 17:36:57.013020 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 27 17:36:57.014640 kernel: kvm_amd: Virtual GIF supported
May 27 17:36:57.027131 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 17:36:57.028958 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:36:58.686872 systemd-resolved[1406]: Clock change detected. Flushing caches.
May 27 17:36:58.686993 systemd-timesyncd[1493]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 27 17:36:58.687430 systemd-timesyncd[1493]: Initial clock synchronization to Tue 2025-05-27 17:36:58.686715 UTC.
May 27 17:36:58.697501 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:36:58.755515 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:36:58.757367 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:36:58.760317 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:36:58.761750 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:36:58.763106 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 17:36:58.766567 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:36:58.767780 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:36:58.769043 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:36:58.771315 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:36:58.771349 systemd[1]: Reached target paths.target - Path Units.
May 27 17:36:58.772281 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:36:58.774132 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:36:58.777294 kernel: EDAC MC: Ver: 3.0.0
May 27 17:36:58.777686 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:36:58.781719 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:36:58.783242 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:36:58.784514 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:36:58.798968 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:36:58.800454 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:36:58.802268 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:36:58.804015 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:36:58.805006 systemd[1]: Reached target basic.target - Basic System.
May 27 17:36:58.805991 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:36:58.806021 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:36:58.807028 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:36:58.809396 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:36:58.812198 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:36:58.820322 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:36:58.823674 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:36:58.824741 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:36:58.826215 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 17:36:58.829989 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:36:58.830112 jq[1567]: false
May 27 17:36:58.834188 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:36:58.836417 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:36:58.839318 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:36:58.843252 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache
May 27 17:36:58.843210 oslogin_cache_refresh[1569]: Refreshing passwd entry cache
May 27 17:36:58.846250 extend-filesystems[1568]: Found loop3
May 27 17:36:58.847267 extend-filesystems[1568]: Found loop4
May 27 17:36:58.847267 extend-filesystems[1568]: Found loop5
May 27 17:36:58.847267 extend-filesystems[1568]: Found sr0
May 27 17:36:58.847267 extend-filesystems[1568]: Found vda
May 27 17:36:58.847267 extend-filesystems[1568]: Found vda1
May 27 17:36:58.847267 extend-filesystems[1568]: Found vda2
May 27 17:36:58.847267 extend-filesystems[1568]: Found vda3
May 27 17:36:58.847267 extend-filesystems[1568]: Found usr
May 27 17:36:58.847267 extend-filesystems[1568]: Found vda4
May 27 17:36:58.847267 extend-filesystems[1568]: Found vda6
May 27 17:36:58.847267 extend-filesystems[1568]: Found vda7
May 27 17:36:58.847267 extend-filesystems[1568]: Found vda9
May 27 17:36:58.847267 extend-filesystems[1568]: Checking size of /dev/vda9
May 27 17:36:58.869163 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 27 17:36:58.869221 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting
May 27 17:36:58.869221 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:36:58.869221 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache
May 27 17:36:58.854407 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:36:58.851238 oslogin_cache_refresh[1569]: Failure getting users, quitting
May 27 17:36:58.869691 extend-filesystems[1568]: Resized partition /dev/vda9
May 27 17:36:58.858053 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:36:58.851258 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:36:58.870984 extend-filesystems[1582]: resize2fs 1.47.2 (1-Jan-2025)
May 27 17:36:58.858659 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:36:58.851301 oslogin_cache_refresh[1569]: Refreshing group entry cache
May 27 17:36:58.860909 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:36:58.873651 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:36:58.881178 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:36:58.883871 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:36:58.887192 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:36:58.887623 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:36:58.887876 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:36:58.890791 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:36:58.891941 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting
May 27 17:36:58.891040 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:36:58.891986 oslogin_cache_refresh[1569]: Failure getting groups, quitting
May 27 17:36:58.892583 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:36:58.892003 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:36:58.894916 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 17:36:58.895188 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 17:36:58.937184 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 27 17:36:58.937242 update_engine[1581]: I20250527 17:36:58.932288 1581 main.cc:92] Flatcar Update Engine starting
May 27 17:36:58.937556 jq[1589]: true
May 27 17:36:58.912380 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:36:58.938285 jq[1598]: true
May 27 17:36:58.941191 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 27 17:36:58.941191 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 17:36:58.941191 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 27 17:36:58.948517 extend-filesystems[1568]: Resized filesystem in /dev/vda9
May 27 17:36:58.953372 tar[1592]: linux-amd64/LICENSE
May 27 17:36:58.953372 tar[1592]: linux-amd64/helm
May 27 17:36:58.951383 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:36:58.951420 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 17:36:58.951497 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 17:36:58.951678 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:36:58.954169 systemd-logind[1577]: New seat seat0.
May 27 17:36:58.955755 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:36:58.971486 dbus-daemon[1565]: [system] SELinux support is enabled
May 27 17:36:58.972226 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:36:58.976023 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:36:58.976063 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:36:58.978372 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:36:58.978400 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:36:58.984343 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 27 17:36:58.988018 update_engine[1581]: I20250527 17:36:58.987795 1581 update_check_scheduler.cc:74] Next update check in 10m45s
May 27 17:36:58.988043 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:36:58.993690 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:36:59.006639 bash[1624]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:36:59.008238 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:36:59.013418 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 17:36:59.031854 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:36:59.149043 containerd[1593]: time="2025-05-27T17:36:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:36:59.150147 containerd[1593]: time="2025-05-27T17:36:59.150118165Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:36:59.162947 containerd[1593]: time="2025-05-27T17:36:59.162902641Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.058µs"
May 27 17:36:59.162947 containerd[1593]: time="2025-05-27T17:36:59.162937686Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:36:59.163015 containerd[1593]: time="2025-05-27T17:36:59.162957283Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:36:59.163225 containerd[1593]: time="2025-05-27T17:36:59.163197303Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:36:59.163225 containerd[1593]: time="2025-05-27T17:36:59.163220517Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:36:59.163266 containerd[1593]: time="2025-05-27T17:36:59.163250503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:36:59.163358 containerd[1593]: time="2025-05-27T17:36:59.163331144Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:36:59.163358 containerd[1593]: time="2025-05-27T17:36:59.163350531Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:36:59.163895 containerd[1593]: time="2025-05-27T17:36:59.163790706Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:36:59.163895 containerd[1593]: time="2025-05-27T17:36:59.163825271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:36:59.163987 containerd[1593]: time="2025-05-27T17:36:59.163840149Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:36:59.164052 containerd[1593]: time="2025-05-27T17:36:59.164034243Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:36:59.164231 containerd[1593]: time="2025-05-27T17:36:59.164213329Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:36:59.164763 containerd[1593]: time="2025-05-27T17:36:59.164740046Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:36:59.165120 containerd[1593]: time="2025-05-27T17:36:59.165089261Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:36:59.165186 containerd[1593]: time="2025-05-27T17:36:59.165172888Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:36:59.165320 containerd[1593]: time="2025-05-27T17:36:59.165302531Z" level=info msg="loading plugin"
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 17:36:59.167310 containerd[1593]: time="2025-05-27T17:36:59.167293786Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 17:36:59.167426 containerd[1593]: time="2025-05-27T17:36:59.167411376Z" level=info msg="metadata content store policy set" policy=shared May 27 17:36:59.172740 containerd[1593]: time="2025-05-27T17:36:59.172718519Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 17:36:59.172817 containerd[1593]: time="2025-05-27T17:36:59.172804009Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 17:36:59.172888 containerd[1593]: time="2025-05-27T17:36:59.172875383Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 17:36:59.172938 containerd[1593]: time="2025-05-27T17:36:59.172927040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 17:36:59.173002 containerd[1593]: time="2025-05-27T17:36:59.172988736Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 17:36:59.173059 containerd[1593]: time="2025-05-27T17:36:59.173045202Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 17:36:59.173140 containerd[1593]: time="2025-05-27T17:36:59.173127386Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 17:36:59.173199 containerd[1593]: time="2025-05-27T17:36:59.173182930Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 17:36:59.173260 containerd[1593]: time="2025-05-27T17:36:59.173246880Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 17:36:59.173306 containerd[1593]: time="2025-05-27T17:36:59.173295461Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 17:36:59.173357 containerd[1593]: time="2025-05-27T17:36:59.173345044Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 17:36:59.173405 containerd[1593]: time="2025-05-27T17:36:59.173393855Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 17:36:59.173556 containerd[1593]: time="2025-05-27T17:36:59.173539809Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 17:36:59.173622 containerd[1593]: time="2025-05-27T17:36:59.173610642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 17:36:59.173672 containerd[1593]: time="2025-05-27T17:36:59.173661237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 17:36:59.173724 containerd[1593]: time="2025-05-27T17:36:59.173713304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 17:36:59.173777 containerd[1593]: time="2025-05-27T17:36:59.173763789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 17:36:59.173835 containerd[1593]: time="2025-05-27T17:36:59.173823521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 17:36:59.173890 containerd[1593]: time="2025-05-27T17:36:59.173878244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 17:36:59.173936 containerd[1593]: time="2025-05-27T17:36:59.173925643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 
17:36:59.173990 containerd[1593]: time="2025-05-27T17:36:59.173978331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 17:36:59.174037 containerd[1593]: time="2025-05-27T17:36:59.174025971Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 17:36:59.174118 containerd[1593]: time="2025-05-27T17:36:59.174090261Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 17:36:59.174225 containerd[1593]: time="2025-05-27T17:36:59.174209024Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 17:36:59.174277 containerd[1593]: time="2025-05-27T17:36:59.174267574Z" level=info msg="Start snapshots syncer" May 27 17:36:59.174344 containerd[1593]: time="2025-05-27T17:36:59.174331283Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 17:36:59.174598 containerd[1593]: time="2025-05-27T17:36:59.174563339Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 17:36:59.174762 containerd[1593]: time="2025-05-27T17:36:59.174744498Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:36:59.177594 containerd[1593]: time="2025-05-27T17:36:59.177569586Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:36:59.177769 containerd[1593]: time="2025-05-27T17:36:59.177751688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:36:59.177834 containerd[1593]: time="2025-05-27T17:36:59.177822601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:36:59.177906 containerd[1593]: time="2025-05-27T17:36:59.177892221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:36:59.177955 containerd[1593]: time="2025-05-27T17:36:59.177942916Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:36:59.178009 containerd[1593]: time="2025-05-27T17:36:59.177995976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:36:59.178057 containerd[1593]: time="2025-05-27T17:36:59.178046681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:36:59.178133 containerd[1593]: time="2025-05-27T17:36:59.178117804Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:36:59.178223 containerd[1593]: time="2025-05-27T17:36:59.178205990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:36:59.178279 containerd[1593]: time="2025-05-27T17:36:59.178267716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:36:59.178325 containerd[1593]: time="2025-05-27T17:36:59.178314543Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:36:59.178411 containerd[1593]: time="2025-05-27T17:36:59.178397649Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:36:59.178517 containerd[1593]: time="2025-05-27T17:36:59.178501554Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:36:59.178566 containerd[1593]: time="2025-05-27T17:36:59.178554984Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:36:59.178613 containerd[1593]: time="2025-05-27T17:36:59.178601642Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:36:59.178665 containerd[1593]: time="2025-05-27T17:36:59.178653289Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:36:59.178711 containerd[1593]: time="2025-05-27T17:36:59.178700207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:36:59.178756 containerd[1593]: time="2025-05-27T17:36:59.178745842Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 17:36:59.178809 containerd[1593]: time="2025-05-27T17:36:59.178799493Z" level=info msg="runtime interface created" May 27 17:36:59.178849 containerd[1593]: time="2025-05-27T17:36:59.178839418Z" level=info msg="created NRI interface" May 27 17:36:59.178891 containerd[1593]: time="2025-05-27T17:36:59.178880936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:36:59.178944 containerd[1593]: time="2025-05-27T17:36:59.178933584Z" level=info msg="Connect containerd service" May 27 17:36:59.179007 containerd[1593]: time="2025-05-27T17:36:59.178995430Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:36:59.180002 
containerd[1593]: time="2025-05-27T17:36:59.179981900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:36:59.295724 containerd[1593]: time="2025-05-27T17:36:59.295661421Z" level=info msg="Start subscribing containerd event" May 27 17:36:59.295844 containerd[1593]: time="2025-05-27T17:36:59.295741601Z" level=info msg="Start recovering state" May 27 17:36:59.295917 containerd[1593]: time="2025-05-27T17:36:59.295890580Z" level=info msg="Start event monitor" May 27 17:36:59.295942 containerd[1593]: time="2025-05-27T17:36:59.295921839Z" level=info msg="Start cni network conf syncer for default" May 27 17:36:59.295942 containerd[1593]: time="2025-05-27T17:36:59.295933451Z" level=info msg="Start streaming server" May 27 17:36:59.295996 containerd[1593]: time="2025-05-27T17:36:59.295945794Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:36:59.295996 containerd[1593]: time="2025-05-27T17:36:59.295956755Z" level=info msg="runtime interface starting up..." May 27 17:36:59.295996 containerd[1593]: time="2025-05-27T17:36:59.295965461Z" level=info msg="starting plugins..." May 27 17:36:59.296053 containerd[1593]: time="2025-05-27T17:36:59.295996219Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:36:59.296543 containerd[1593]: time="2025-05-27T17:36:59.296509531Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:36:59.296615 containerd[1593]: time="2025-05-27T17:36:59.296590413Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 17:36:59.296702 containerd[1593]: time="2025-05-27T17:36:59.296679179Z" level=info msg="containerd successfully booted in 0.148189s" May 27 17:36:59.296906 systemd[1]: Started containerd.service - containerd container runtime. 
May 27 17:36:59.352006 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 17:36:59.377915 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 17:36:59.381409 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 17:36:59.403328 systemd[1]: issuegen.service: Deactivated successfully. May 27 17:36:59.403681 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 17:36:59.407006 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 17:36:59.434809 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 17:36:59.437937 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 17:36:59.440523 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 17:36:59.442045 systemd[1]: Reached target getty.target - Login Prompts. May 27 17:36:59.464741 tar[1592]: linux-amd64/README.md May 27 17:36:59.512397 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 17:36:59.726266 systemd-networkd[1491]: eth0: Gained IPv6LL May 27 17:36:59.730333 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 17:36:59.732295 systemd[1]: Reached target network-online.target - Network is Online. May 27 17:36:59.734997 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 17:36:59.737928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:36:59.740389 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 17:36:59.780226 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 17:36:59.780533 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 17:36:59.782195 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 27 17:36:59.782703 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 17:37:00.968035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:37:00.969707 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 17:37:00.971105 systemd[1]: Startup finished in 3.843s (kernel) + 6.207s (initrd) + 4.506s (userspace) = 14.556s. May 27 17:37:00.973809 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:37:02.038889 kubelet[1694]: E0527 17:37:02.038813 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:37:02.043832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:37:02.044167 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:37:02.044694 systemd[1]: kubelet.service: Consumed 2.090s CPU time, 267.3M memory peak. May 27 17:37:03.717298 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:37:03.718978 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:42228.service - OpenSSH per-connection server daemon (10.0.0.1:42228). May 27 17:37:03.788093 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 42228 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:37:03.789981 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:37:03.797249 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:37:03.798494 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 27 17:37:03.805356 systemd-logind[1577]: New session 1 of user core. May 27 17:37:03.824245 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:37:03.827422 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 17:37:03.848906 (systemd)[1712]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:37:03.851880 systemd-logind[1577]: New session c1 of user core. May 27 17:37:04.016133 systemd[1712]: Queued start job for default target default.target. May 27 17:37:04.040626 systemd[1712]: Created slice app.slice - User Application Slice. May 27 17:37:04.040654 systemd[1712]: Reached target paths.target - Paths. May 27 17:37:04.040712 systemd[1712]: Reached target timers.target - Timers. May 27 17:37:04.042625 systemd[1712]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:37:04.056958 systemd[1712]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:37:04.057172 systemd[1712]: Reached target sockets.target - Sockets. May 27 17:37:04.057222 systemd[1712]: Reached target basic.target - Basic System. May 27 17:37:04.057286 systemd[1712]: Reached target default.target - Main User Target. May 27 17:37:04.057337 systemd[1712]: Startup finished in 198ms. May 27 17:37:04.057812 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:37:04.059898 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:37:04.124375 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:42230.service - OpenSSH per-connection server daemon (10.0.0.1:42230). May 27 17:37:04.177885 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 42230 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:37:04.180002 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:37:04.185566 systemd-logind[1577]: New session 2 of user core. 
May 27 17:37:04.196346 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 17:37:04.251980 sshd[1725]: Connection closed by 10.0.0.1 port 42230 May 27 17:37:04.252308 sshd-session[1723]: pam_unix(sshd:session): session closed for user core May 27 17:37:04.266041 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:42230.service: Deactivated successfully. May 27 17:37:04.268408 systemd[1]: session-2.scope: Deactivated successfully. May 27 17:37:04.269361 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. May 27 17:37:04.273280 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:42242.service - OpenSSH per-connection server daemon (10.0.0.1:42242). May 27 17:37:04.274130 systemd-logind[1577]: Removed session 2. May 27 17:37:04.330778 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 42242 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:37:04.332768 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:37:04.338633 systemd-logind[1577]: New session 3 of user core. May 27 17:37:04.346423 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:37:04.397163 sshd[1733]: Connection closed by 10.0.0.1 port 42242 May 27 17:37:04.397696 sshd-session[1731]: pam_unix(sshd:session): session closed for user core May 27 17:37:04.411178 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:42242.service: Deactivated successfully. May 27 17:37:04.413001 systemd[1]: session-3.scope: Deactivated successfully. May 27 17:37:04.413921 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. May 27 17:37:04.417088 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:42244.service - OpenSSH per-connection server daemon (10.0.0.1:42244). May 27 17:37:04.417805 systemd-logind[1577]: Removed session 3. 
May 27 17:37:04.465045 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 42244 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:37:04.466792 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:37:04.471895 systemd-logind[1577]: New session 4 of user core. May 27 17:37:04.481231 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:37:04.538532 sshd[1741]: Connection closed by 10.0.0.1 port 42244 May 27 17:37:04.538828 sshd-session[1739]: pam_unix(sshd:session): session closed for user core May 27 17:37:04.551554 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:42244.service: Deactivated successfully. May 27 17:37:04.553129 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:37:04.553867 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. May 27 17:37:04.556847 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:42252.service - OpenSSH per-connection server daemon (10.0.0.1:42252). May 27 17:37:04.557615 systemd-logind[1577]: Removed session 4. May 27 17:37:04.618372 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 42252 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:37:04.620051 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:37:04.624483 systemd-logind[1577]: New session 5 of user core. May 27 17:37:04.634231 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 17:37:04.694391 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:37:04.694793 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:37:04.716906 sudo[1750]: pam_unix(sudo:session): session closed for user root May 27 17:37:04.718737 sshd[1749]: Connection closed by 10.0.0.1 port 42252 May 27 17:37:04.719485 sshd-session[1747]: pam_unix(sshd:session): session closed for user core May 27 17:37:04.741361 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:42252.service: Deactivated successfully. May 27 17:37:04.743278 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:37:04.744285 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. May 27 17:37:04.747417 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:42266.service - OpenSSH per-connection server daemon (10.0.0.1:42266). May 27 17:37:04.748128 systemd-logind[1577]: Removed session 5. May 27 17:37:04.800479 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 42266 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:37:04.802288 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:37:04.807197 systemd-logind[1577]: New session 6 of user core. May 27 17:37:04.818201 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 17:37:04.875668 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:37:04.876065 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:37:05.370788 sudo[1760]: pam_unix(sudo:session): session closed for user root May 27 17:37:05.378129 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:37:05.378439 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:37:05.390421 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:37:05.450397 augenrules[1782]: No rules May 27 17:37:05.452769 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:37:05.453083 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:37:05.454390 sudo[1759]: pam_unix(sudo:session): session closed for user root May 27 17:37:05.456149 sshd[1758]: Connection closed by 10.0.0.1 port 42266 May 27 17:37:05.456499 sshd-session[1756]: pam_unix(sshd:session): session closed for user core May 27 17:37:05.470366 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:42266.service: Deactivated successfully. May 27 17:37:05.472332 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:37:05.473050 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. May 27 17:37:05.476263 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:42274.service - OpenSSH per-connection server daemon (10.0.0.1:42274). May 27 17:37:05.476767 systemd-logind[1577]: Removed session 6. May 27 17:37:05.527621 sshd[1791]: Accepted publickey for core from 10.0.0.1 port 42274 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:37:05.529278 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:37:05.534022 systemd-logind[1577]: New session 7 of user core. 
May 27 17:37:05.549301 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:37:05.603319 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:37:05.603620 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:37:05.937172 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:37:05.959600 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:37:06.196271 dockerd[1814]: time="2025-05-27T17:37:06.196104304Z" level=info msg="Starting up" May 27 17:37:06.197712 dockerd[1814]: time="2025-05-27T17:37:06.197659741Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:37:06.339486 dockerd[1814]: time="2025-05-27T17:37:06.339417117Z" level=info msg="Loading containers: start." May 27 17:37:06.351099 kernel: Initializing XFRM netlink socket May 27 17:37:06.614892 systemd-networkd[1491]: docker0: Link UP May 27 17:37:06.621058 dockerd[1814]: time="2025-05-27T17:37:06.621004183Z" level=info msg="Loading containers: done." May 27 17:37:06.637762 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1320602819-merged.mount: Deactivated successfully. 
May 27 17:37:06.638772 dockerd[1814]: time="2025-05-27T17:37:06.638714787Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:37:06.638886 dockerd[1814]: time="2025-05-27T17:37:06.638813642Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:37:06.639007 dockerd[1814]: time="2025-05-27T17:37:06.638983541Z" level=info msg="Initializing buildkit" May 27 17:37:06.671689 dockerd[1814]: time="2025-05-27T17:37:06.671636317Z" level=info msg="Completed buildkit initialization" May 27 17:37:06.677510 dockerd[1814]: time="2025-05-27T17:37:06.677454989Z" level=info msg="Daemon has completed initialization" May 27 17:37:06.677676 dockerd[1814]: time="2025-05-27T17:37:06.677528947Z" level=info msg="API listen on /run/docker.sock" May 27 17:37:06.677770 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:37:07.422260 containerd[1593]: time="2025-05-27T17:37:07.422214529Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 17:37:08.471965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719390900.mount: Deactivated successfully. 
May 27 17:37:10.035879 containerd[1593]: time="2025-05-27T17:37:10.035792691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:10.072329 containerd[1593]: time="2025-05-27T17:37:10.072291008Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 27 17:37:10.094774 containerd[1593]: time="2025-05-27T17:37:10.094736702Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:10.112006 containerd[1593]: time="2025-05-27T17:37:10.111963478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:10.112989 containerd[1593]: time="2025-05-27T17:37:10.112960788Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 2.690703409s" May 27 17:37:10.113041 containerd[1593]: time="2025-05-27T17:37:10.112992367Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 27 17:37:10.113563 containerd[1593]: time="2025-05-27T17:37:10.113534614Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 17:37:11.629266 containerd[1593]: time="2025-05-27T17:37:11.629201511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:11.630060 containerd[1593]: time="2025-05-27T17:37:11.630029654Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 27 17:37:11.631330 containerd[1593]: time="2025-05-27T17:37:11.631279007Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:11.634102 containerd[1593]: time="2025-05-27T17:37:11.634050174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:11.634950 containerd[1593]: time="2025-05-27T17:37:11.634912922Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.521343613s" May 27 17:37:11.634996 containerd[1593]: time="2025-05-27T17:37:11.634948609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 27 17:37:11.635611 containerd[1593]: time="2025-05-27T17:37:11.635500834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 17:37:12.294795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 17:37:12.298546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:37:13.004472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:37:13.031471 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:37:13.090316 kubelet[2091]: E0527 17:37:13.090257 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:37:13.097917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:37:13.098195 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:37:13.098710 systemd[1]: kubelet.service: Consumed 370ms CPU time, 111.2M memory peak. May 27 17:37:13.724901 containerd[1593]: time="2025-05-27T17:37:13.724837466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:13.725606 containerd[1593]: time="2025-05-27T17:37:13.725556364Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 27 17:37:13.726878 containerd[1593]: time="2025-05-27T17:37:13.726831826Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:13.729235 containerd[1593]: time="2025-05-27T17:37:13.729198044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:13.730161 containerd[1593]: time="2025-05-27T17:37:13.730106458Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id 
\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 2.094573142s" May 27 17:37:13.730161 containerd[1593]: time="2025-05-27T17:37:13.730141974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 27 17:37:13.730895 containerd[1593]: time="2025-05-27T17:37:13.730861083Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 17:37:14.996629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632678681.mount: Deactivated successfully. May 27 17:37:15.867315 containerd[1593]: time="2025-05-27T17:37:15.867234135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:15.868137 containerd[1593]: time="2025-05-27T17:37:15.868068399Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 27 17:37:15.869354 containerd[1593]: time="2025-05-27T17:37:15.869316229Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:15.871433 containerd[1593]: time="2025-05-27T17:37:15.871390319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:15.872013 containerd[1593]: time="2025-05-27T17:37:15.871982610Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 2.141092973s" May 27 17:37:15.872013 containerd[1593]: time="2025-05-27T17:37:15.872012927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 17:37:15.872556 containerd[1593]: time="2025-05-27T17:37:15.872525298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 17:37:16.969613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545163870.mount: Deactivated successfully. May 27 17:37:17.779475 containerd[1593]: time="2025-05-27T17:37:17.779411861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:17.780229 containerd[1593]: time="2025-05-27T17:37:17.780186213Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 27 17:37:17.781345 containerd[1593]: time="2025-05-27T17:37:17.781308748Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:17.785696 containerd[1593]: time="2025-05-27T17:37:17.785637095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:17.786726 containerd[1593]: time="2025-05-27T17:37:17.786691362Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.914129626s" May 27 17:37:17.786726 containerd[1593]: time="2025-05-27T17:37:17.786725336Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 27 17:37:17.787239 containerd[1593]: time="2025-05-27T17:37:17.787203032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 17:37:18.758600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860650966.mount: Deactivated successfully. May 27 17:37:18.770590 containerd[1593]: time="2025-05-27T17:37:18.770504182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:37:18.771346 containerd[1593]: time="2025-05-27T17:37:18.771287682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 17:37:18.772603 containerd[1593]: time="2025-05-27T17:37:18.772545020Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:37:18.775192 containerd[1593]: time="2025-05-27T17:37:18.775129076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:37:18.775785 containerd[1593]: time="2025-05-27T17:37:18.775744380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 988.500842ms" May 27 17:37:18.775785 containerd[1593]: time="2025-05-27T17:37:18.775784645Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 17:37:18.776296 containerd[1593]: time="2025-05-27T17:37:18.776273171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 17:37:22.661854 containerd[1593]: time="2025-05-27T17:37:22.661751350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:22.693536 containerd[1593]: time="2025-05-27T17:37:22.693473190Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 27 17:37:22.734958 containerd[1593]: time="2025-05-27T17:37:22.734874682Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:22.824442 containerd[1593]: time="2025-05-27T17:37:22.824361292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:22.825869 containerd[1593]: time="2025-05-27T17:37:22.825789350Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.049400131s" May 27 17:37:22.825869 containerd[1593]: time="2025-05-27T17:37:22.825854863Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 17:37:23.206038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 17:37:23.207811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:37:23.429344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:37:23.440531 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:37:23.506871 kubelet[2210]: E0527 17:37:23.506674 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:37:23.512421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:37:23.512690 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:37:23.513187 systemd[1]: kubelet.service: Consumed 237ms CPU time, 109.5M memory peak. May 27 17:37:26.192461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:37:26.192717 systemd[1]: kubelet.service: Consumed 237ms CPU time, 109.5M memory peak. May 27 17:37:26.195644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:37:26.221877 systemd[1]: Reload requested from client PID 2226 ('systemctl') (unit session-7.scope)... May 27 17:37:26.221900 systemd[1]: Reloading... May 27 17:37:26.342111 zram_generator::config[2272]: No configuration found. 
May 27 17:37:26.797609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:37:26.925675 systemd[1]: Reloading finished in 703 ms. May 27 17:37:27.002693 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 17:37:27.002792 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 17:37:27.003161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:37:27.003210 systemd[1]: kubelet.service: Consumed 180ms CPU time, 98.3M memory peak. May 27 17:37:27.004947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:37:27.178830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:37:27.192412 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:37:27.229929 kubelet[2317]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:37:27.229929 kubelet[2317]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:37:27.229929 kubelet[2317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 17:37:27.230420 kubelet[2317]: I0527 17:37:27.229958 2317 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:37:28.357410 kubelet[2317]: I0527 17:37:28.357339 2317 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:37:28.357410 kubelet[2317]: I0527 17:37:28.357383 2317 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:37:28.358218 kubelet[2317]: I0527 17:37:28.357711 2317 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:37:28.438453 kubelet[2317]: I0527 17:37:28.438355 2317 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:37:28.461207 kubelet[2317]: E0527 17:37:28.461171 2317 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 17:37:28.468020 kubelet[2317]: I0527 17:37:28.467989 2317 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:37:28.474977 kubelet[2317]: I0527 17:37:28.474959 2317 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:37:28.475239 kubelet[2317]: I0527 17:37:28.475203 2317 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:37:28.475406 kubelet[2317]: I0527 17:37:28.475233 2317 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:37:28.475557 kubelet[2317]: I0527 17:37:28.475410 2317 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:37:28.475557 
kubelet[2317]: I0527 17:37:28.475425 2317 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:37:28.476465 kubelet[2317]: I0527 17:37:28.476443 2317 state_mem.go:36] "Initialized new in-memory state store" May 27 17:37:28.478871 kubelet[2317]: I0527 17:37:28.478835 2317 kubelet.go:480] "Attempting to sync node with API server" May 27 17:37:28.478871 kubelet[2317]: I0527 17:37:28.478873 2317 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:37:28.478949 kubelet[2317]: I0527 17:37:28.478902 2317 kubelet.go:386] "Adding apiserver pod source" May 27 17:37:28.478949 kubelet[2317]: I0527 17:37:28.478917 2317 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:37:28.484022 kubelet[2317]: I0527 17:37:28.483934 2317 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:37:28.484663 kubelet[2317]: I0527 17:37:28.484624 2317 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:37:28.485671 kubelet[2317]: W0527 17:37:28.485631 2317 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 27 17:37:28.487203 kubelet[2317]: E0527 17:37:28.487166 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 17:37:28.487633 kubelet[2317]: E0527 17:37:28.487315 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 17:37:28.488959 kubelet[2317]: I0527 17:37:28.488932 2317 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:37:28.489010 kubelet[2317]: I0527 17:37:28.488993 2317 server.go:1289] "Started kubelet" May 27 17:37:28.538960 kubelet[2317]: I0527 17:37:28.538886 2317 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:37:28.539777 kubelet[2317]: I0527 17:37:28.539647 2317 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:37:28.539777 kubelet[2317]: I0527 17:37:28.539648 2317 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:37:28.540756 kubelet[2317]: I0527 17:37:28.540740 2317 server.go:317] "Adding debug handlers to kubelet server" May 27 17:37:28.541490 kubelet[2317]: I0527 17:37:28.541462 2317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:37:28.541701 kubelet[2317]: E0527 17:37:28.541673 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:37:28.541738 kubelet[2317]: I0527 17:37:28.541715 2317 volume_manager.go:297] 
"Starting Kubelet Volume Manager" May 27 17:37:28.541738 kubelet[2317]: I0527 17:37:28.541719 2317 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:37:28.542210 kubelet[2317]: I0527 17:37:28.542182 2317 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:37:28.542339 kubelet[2317]: I0527 17:37:28.542316 2317 reconciler.go:26] "Reconciler: start to sync state" May 27 17:37:28.542937 kubelet[2317]: E0527 17:37:28.542796 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 17:37:28.561309 kubelet[2317]: E0527 17:37:28.561275 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms" May 27 17:37:28.563493 kubelet[2317]: E0527 17:37:28.563444 2317 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:37:28.563934 kubelet[2317]: I0527 17:37:28.563899 2317 factory.go:223] Registration of the systemd container factory successfully May 27 17:37:28.564135 kubelet[2317]: I0527 17:37:28.564112 2317 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:37:28.566606 kubelet[2317]: I0527 17:37:28.566561 2317 factory.go:223] Registration of the containerd container factory successfully May 27 17:37:28.573407 kubelet[2317]: E0527 17:37:28.572163 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184372ee600a5a4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 17:37:28.488958543 +0000 UTC m=+1.292305865,LastTimestamp:2025-05-27 17:37:28.488958543 +0000 UTC m=+1.292305865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 17:37:28.586038 kubelet[2317]: I0527 17:37:28.585879 2317 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:37:28.586038 kubelet[2317]: I0527 17:37:28.585898 2317 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:37:28.586038 kubelet[2317]: I0527 17:37:28.585913 2317 state_mem.go:36] "Initialized new in-memory state store" May 27 17:37:28.587584 kubelet[2317]: I0527 17:37:28.587549 2317 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" May 27 17:37:28.589273 kubelet[2317]: I0527 17:37:28.589249 2317 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 17:37:28.589323 kubelet[2317]: I0527 17:37:28.589279 2317 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:37:28.589323 kubelet[2317]: I0527 17:37:28.589302 2317 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:37:28.589323 kubelet[2317]: I0527 17:37:28.589315 2317 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:37:28.589385 kubelet[2317]: E0527 17:37:28.589358 2317 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:37:28.594914 kubelet[2317]: E0527 17:37:28.594878 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 17:37:28.595021 kubelet[2317]: I0527 17:37:28.595006 2317 policy_none.go:49] "None policy: Start" May 27 17:37:28.595057 kubelet[2317]: I0527 17:37:28.595023 2317 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:37:28.595057 kubelet[2317]: I0527 17:37:28.595035 2317 state_mem.go:35] "Initializing new in-memory state store" May 27 17:37:28.601446 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:37:28.615726 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 17:37:28.619598 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 27 17:37:28.639975 kubelet[2317]: E0527 17:37:28.639906 2317 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:37:28.640150 kubelet[2317]: I0527 17:37:28.640133 2317 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:37:28.640194 kubelet[2317]: I0527 17:37:28.640149 2317 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:37:28.640360 kubelet[2317]: I0527 17:37:28.640336 2317 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:37:28.641350 kubelet[2317]: E0527 17:37:28.641326 2317 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:37:28.641408 kubelet[2317]: E0527 17:37:28.641371 2317 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 17:37:28.701005 systemd[1]: Created slice kubepods-burstable-pod9d54e2471a3c8c65465e426dd9a05f22.slice - libcontainer container kubepods-burstable-pod9d54e2471a3c8c65465e426dd9a05f22.slice. May 27 17:37:28.711980 kubelet[2317]: E0527 17:37:28.711935 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:37:28.714503 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice. 
May 27 17:37:28.730350 kubelet[2317]: E0527 17:37:28.730324 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:37:28.733201 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice. May 27 17:37:28.735717 kubelet[2317]: E0527 17:37:28.735698 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:37:28.741779 kubelet[2317]: I0527 17:37:28.741759 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:37:28.742133 kubelet[2317]: E0527 17:37:28.742097 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 27 17:37:28.761639 kubelet[2317]: E0527 17:37:28.761615 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" May 27 17:37:28.844023 kubelet[2317]: I0527 17:37:28.843980 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d54e2471a3c8c65465e426dd9a05f22-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d54e2471a3c8c65465e426dd9a05f22\") " pod="kube-system/kube-apiserver-localhost" May 27 17:37:28.844023 kubelet[2317]: I0527 17:37:28.844021 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9d54e2471a3c8c65465e426dd9a05f22-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d54e2471a3c8c65465e426dd9a05f22\") " pod="kube-system/kube-apiserver-localhost" May 27 17:37:28.844023 kubelet[2317]: I0527 17:37:28.844041 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:28.844260 kubelet[2317]: I0527 17:37:28.844057 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:28.844260 kubelet[2317]: I0527 17:37:28.844120 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d54e2471a3c8c65465e426dd9a05f22-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d54e2471a3c8c65465e426dd9a05f22\") " pod="kube-system/kube-apiserver-localhost" May 27 17:37:28.844260 kubelet[2317]: I0527 17:37:28.844174 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:28.844260 kubelet[2317]: I0527 17:37:28.844210 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:28.844260 kubelet[2317]: I0527 17:37:28.844229 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:28.844404 kubelet[2317]: I0527 17:37:28.844254 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 17:37:28.944091 kubelet[2317]: I0527 17:37:28.943944 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:37:28.944403 kubelet[2317]: E0527 17:37:28.944364 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 27 17:37:29.013801 containerd[1593]: time="2025-05-27T17:37:29.013725138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d54e2471a3c8c65465e426dd9a05f22,Namespace:kube-system,Attempt:0,}" May 27 17:37:29.031651 containerd[1593]: time="2025-05-27T17:37:29.031608205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 27 17:37:29.036921 containerd[1593]: time="2025-05-27T17:37:29.036842782Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 27 17:37:29.045961 containerd[1593]: time="2025-05-27T17:37:29.045925255Z" level=info msg="connecting to shim 8daa8c802f066ae78b8bda01d26d58422ead51b0d05105666ef1ed912e86a126" address="unix:///run/containerd/s/a17282f63a82437aa94dbe2bc720634eef402615fdbcf415718ae48ccb320923" namespace=k8s.io protocol=ttrpc version=3 May 27 17:37:29.078564 containerd[1593]: time="2025-05-27T17:37:29.078521915Z" level=info msg="connecting to shim 470600a215bb6ab05c66c9d33705c31f3ccdbf4fd8224e49714c4d54bdd78c31" address="unix:///run/containerd/s/3398329aec056b218578cc38e7cb287b76b6976813ec6b4d55f071951300c467" namespace=k8s.io protocol=ttrpc version=3 May 27 17:37:29.099368 systemd[1]: Started cri-containerd-8daa8c802f066ae78b8bda01d26d58422ead51b0d05105666ef1ed912e86a126.scope - libcontainer container 8daa8c802f066ae78b8bda01d26d58422ead51b0d05105666ef1ed912e86a126. May 27 17:37:29.099863 containerd[1593]: time="2025-05-27T17:37:29.099822341Z" level=info msg="connecting to shim d3cdee3139edc48526adba0f1570126196c5c15a9251f53683e58b66f7bdf7bb" address="unix:///run/containerd/s/040e96f76bc6bb51642de0d42f6cc22bf96dc4da7647aa4d63110c85d75482a2" namespace=k8s.io protocol=ttrpc version=3 May 27 17:37:29.119203 systemd[1]: Started cri-containerd-470600a215bb6ab05c66c9d33705c31f3ccdbf4fd8224e49714c4d54bdd78c31.scope - libcontainer container 470600a215bb6ab05c66c9d33705c31f3ccdbf4fd8224e49714c4d54bdd78c31. May 27 17:37:29.152321 systemd[1]: Started cri-containerd-d3cdee3139edc48526adba0f1570126196c5c15a9251f53683e58b66f7bdf7bb.scope - libcontainer container d3cdee3139edc48526adba0f1570126196c5c15a9251f53683e58b66f7bdf7bb. 
May 27 17:37:29.162359 kubelet[2317]: E0527 17:37:29.162320 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" May 27 17:37:29.208267 containerd[1593]: time="2025-05-27T17:37:29.208024479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"470600a215bb6ab05c66c9d33705c31f3ccdbf4fd8224e49714c4d54bdd78c31\"" May 27 17:37:29.218670 containerd[1593]: time="2025-05-27T17:37:29.218620750Z" level=info msg="CreateContainer within sandbox \"470600a215bb6ab05c66c9d33705c31f3ccdbf4fd8224e49714c4d54bdd78c31\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:37:29.234107 containerd[1593]: time="2025-05-27T17:37:29.233708425Z" level=info msg="Container b485d6e3247ed51f686c9b80c2193c47bd3d3381726f88e077acccb8d4936365: CDI devices from CRI Config.CDIDevices: []" May 27 17:37:29.235276 containerd[1593]: time="2025-05-27T17:37:29.235248133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d54e2471a3c8c65465e426dd9a05f22,Namespace:kube-system,Attempt:0,} returns sandbox id \"8daa8c802f066ae78b8bda01d26d58422ead51b0d05105666ef1ed912e86a126\"" May 27 17:37:29.241967 containerd[1593]: time="2025-05-27T17:37:29.241909615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3cdee3139edc48526adba0f1570126196c5c15a9251f53683e58b66f7bdf7bb\"" May 27 17:37:29.242190 containerd[1593]: time="2025-05-27T17:37:29.242040481Z" level=info msg="CreateContainer within sandbox \"8daa8c802f066ae78b8bda01d26d58422ead51b0d05105666ef1ed912e86a126\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:37:29.243938 containerd[1593]: time="2025-05-27T17:37:29.243894738Z" level=info msg="CreateContainer within sandbox \"470600a215bb6ab05c66c9d33705c31f3ccdbf4fd8224e49714c4d54bdd78c31\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b485d6e3247ed51f686c9b80c2193c47bd3d3381726f88e077acccb8d4936365\"" May 27 17:37:29.244468 containerd[1593]: time="2025-05-27T17:37:29.244434150Z" level=info msg="StartContainer for \"b485d6e3247ed51f686c9b80c2193c47bd3d3381726f88e077acccb8d4936365\"" May 27 17:37:29.246639 containerd[1593]: time="2025-05-27T17:37:29.246609249Z" level=info msg="connecting to shim b485d6e3247ed51f686c9b80c2193c47bd3d3381726f88e077acccb8d4936365" address="unix:///run/containerd/s/3398329aec056b218578cc38e7cb287b76b6976813ec6b4d55f071951300c467" protocol=ttrpc version=3 May 27 17:37:29.247809 containerd[1593]: time="2025-05-27T17:37:29.247782048Z" level=info msg="CreateContainer within sandbox \"d3cdee3139edc48526adba0f1570126196c5c15a9251f53683e58b66f7bdf7bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:37:29.257754 containerd[1593]: time="2025-05-27T17:37:29.257717270Z" level=info msg="Container dae963baf88253b260bab0ac6067309cef4f7812e78f190565e65929a734da90: CDI devices from CRI Config.CDIDevices: []" May 27 17:37:29.261475 containerd[1593]: time="2025-05-27T17:37:29.261430083Z" level=info msg="Container 3667df577c8363fa2d35c61d4c127a181bd4bcbb59ef3a8c302c6691a6f74dda: CDI devices from CRI Config.CDIDevices: []" May 27 17:37:29.275608 containerd[1593]: time="2025-05-27T17:37:29.275515619Z" level=info msg="CreateContainer within sandbox \"8daa8c802f066ae78b8bda01d26d58422ead51b0d05105666ef1ed912e86a126\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dae963baf88253b260bab0ac6067309cef4f7812e78f190565e65929a734da90\"" May 27 17:37:29.276944 containerd[1593]: 
time="2025-05-27T17:37:29.276837658Z" level=info msg="StartContainer for \"dae963baf88253b260bab0ac6067309cef4f7812e78f190565e65929a734da90\"" May 27 17:37:29.277062 containerd[1593]: time="2025-05-27T17:37:29.277029388Z" level=info msg="CreateContainer within sandbox \"d3cdee3139edc48526adba0f1570126196c5c15a9251f53683e58b66f7bdf7bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3667df577c8363fa2d35c61d4c127a181bd4bcbb59ef3a8c302c6691a6f74dda\"" May 27 17:37:29.277346 systemd[1]: Started cri-containerd-b485d6e3247ed51f686c9b80c2193c47bd3d3381726f88e077acccb8d4936365.scope - libcontainer container b485d6e3247ed51f686c9b80c2193c47bd3d3381726f88e077acccb8d4936365. May 27 17:37:29.277895 containerd[1593]: time="2025-05-27T17:37:29.277846330Z" level=info msg="StartContainer for \"3667df577c8363fa2d35c61d4c127a181bd4bcbb59ef3a8c302c6691a6f74dda\"" May 27 17:37:29.278891 containerd[1593]: time="2025-05-27T17:37:29.278821799Z" level=info msg="connecting to shim dae963baf88253b260bab0ac6067309cef4f7812e78f190565e65929a734da90" address="unix:///run/containerd/s/a17282f63a82437aa94dbe2bc720634eef402615fdbcf415718ae48ccb320923" protocol=ttrpc version=3 May 27 17:37:29.279785 containerd[1593]: time="2025-05-27T17:37:29.279695898Z" level=info msg="connecting to shim 3667df577c8363fa2d35c61d4c127a181bd4bcbb59ef3a8c302c6691a6f74dda" address="unix:///run/containerd/s/040e96f76bc6bb51642de0d42f6cc22bf96dc4da7647aa4d63110c85d75482a2" protocol=ttrpc version=3 May 27 17:37:29.312229 systemd[1]: Started cri-containerd-3667df577c8363fa2d35c61d4c127a181bd4bcbb59ef3a8c302c6691a6f74dda.scope - libcontainer container 3667df577c8363fa2d35c61d4c127a181bd4bcbb59ef3a8c302c6691a6f74dda. May 27 17:37:29.315550 systemd[1]: Started cri-containerd-dae963baf88253b260bab0ac6067309cef4f7812e78f190565e65929a734da90.scope - libcontainer container dae963baf88253b260bab0ac6067309cef4f7812e78f190565e65929a734da90. 
May 27 17:37:29.347315 kubelet[2317]: I0527 17:37:29.347283 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:37:29.347931 kubelet[2317]: E0527 17:37:29.347905 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 27 17:37:29.351368 containerd[1593]: time="2025-05-27T17:37:29.351332723Z" level=info msg="StartContainer for \"b485d6e3247ed51f686c9b80c2193c47bd3d3381726f88e077acccb8d4936365\" returns successfully" May 27 17:37:29.384596 containerd[1593]: time="2025-05-27T17:37:29.384537093Z" level=info msg="StartContainer for \"dae963baf88253b260bab0ac6067309cef4f7812e78f190565e65929a734da90\" returns successfully" May 27 17:37:29.394213 containerd[1593]: time="2025-05-27T17:37:29.394159819Z" level=info msg="StartContainer for \"3667df577c8363fa2d35c61d4c127a181bd4bcbb59ef3a8c302c6691a6f74dda\" returns successfully" May 27 17:37:29.599038 kubelet[2317]: E0527 17:37:29.598852 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:37:29.602846 kubelet[2317]: E0527 17:37:29.602514 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:37:29.609447 kubelet[2317]: E0527 17:37:29.609412 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:37:30.150730 kubelet[2317]: I0527 17:37:30.150685 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:37:30.608868 kubelet[2317]: E0527 17:37:30.608831 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" May 27 17:37:30.609467 kubelet[2317]: E0527 17:37:30.609150 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:37:31.092490 kubelet[2317]: E0527 17:37:31.092428 2317 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 17:37:31.319110 kubelet[2317]: I0527 17:37:31.318687 2317 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:37:31.360288 kubelet[2317]: I0527 17:37:31.360087 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:37:31.474935 kubelet[2317]: E0527 17:37:31.474885 2317 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 17:37:31.474935 kubelet[2317]: I0527 17:37:31.474921 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:37:31.477223 kubelet[2317]: E0527 17:37:31.477183 2317 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 17:37:31.477223 kubelet[2317]: I0527 17:37:31.477202 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:37:31.478744 kubelet[2317]: E0527 17:37:31.478711 2317 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 17:37:31.486173 kubelet[2317]: I0527 17:37:31.486133 2317 apiserver.go:52] "Watching 
apiserver" May 27 17:37:31.543046 kubelet[2317]: I0527 17:37:31.542959 2317 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:37:33.168930 systemd[1]: Reload requested from client PID 2598 ('systemctl') (unit session-7.scope)... May 27 17:37:33.168948 systemd[1]: Reloading... May 27 17:37:33.251143 zram_generator::config[2641]: No configuration found. May 27 17:37:33.358037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:37:33.491621 systemd[1]: Reloading finished in 322 ms. May 27 17:37:33.523181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:37:33.549055 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:37:33.549437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:37:33.549523 systemd[1]: kubelet.service: Consumed 1.341s CPU time, 132M memory peak. May 27 17:37:33.551749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:37:33.786507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:37:33.803691 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:37:33.856834 kubelet[2686]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:37:33.856834 kubelet[2686]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 27 17:37:33.856834 kubelet[2686]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:37:33.857436 kubelet[2686]: I0527 17:37:33.856886 2686 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:37:33.865162 kubelet[2686]: I0527 17:37:33.865125 2686 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:37:33.865162 kubelet[2686]: I0527 17:37:33.865150 2686 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:37:33.865355 kubelet[2686]: I0527 17:37:33.865335 2686 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:37:33.866483 kubelet[2686]: I0527 17:37:33.866458 2686 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 17:37:33.868622 kubelet[2686]: I0527 17:37:33.868525 2686 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:37:33.871669 kubelet[2686]: I0527 17:37:33.871648 2686 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:37:33.877865 kubelet[2686]: I0527 17:37:33.877843 2686 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:37:33.878129 kubelet[2686]: I0527 17:37:33.878100 2686 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:37:33.878262 kubelet[2686]: I0527 17:37:33.878128 2686 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:37:33.878347 kubelet[2686]: I0527 17:37:33.878269 2686 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:37:33.878347 
kubelet[2686]: I0527 17:37:33.878276 2686 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:37:33.878347 kubelet[2686]: I0527 17:37:33.878334 2686 state_mem.go:36] "Initialized new in-memory state store" May 27 17:37:33.878513 kubelet[2686]: I0527 17:37:33.878501 2686 kubelet.go:480] "Attempting to sync node with API server" May 27 17:37:33.878513 kubelet[2686]: I0527 17:37:33.878514 2686 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:37:33.878576 kubelet[2686]: I0527 17:37:33.878545 2686 kubelet.go:386] "Adding apiserver pod source" May 27 17:37:33.878576 kubelet[2686]: I0527 17:37:33.878560 2686 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:37:33.882872 kubelet[2686]: I0527 17:37:33.880434 2686 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:37:33.882872 kubelet[2686]: I0527 17:37:33.881185 2686 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:37:33.887104 kubelet[2686]: I0527 17:37:33.885728 2686 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:37:33.887104 kubelet[2686]: I0527 17:37:33.885781 2686 server.go:1289] "Started kubelet" May 27 17:37:33.890087 kubelet[2686]: I0527 17:37:33.889280 2686 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:37:33.891838 kubelet[2686]: I0527 17:37:33.891782 2686 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:37:33.892704 kubelet[2686]: I0527 17:37:33.892687 2686 server.go:317] "Adding debug handlers to kubelet server" May 27 17:37:33.897177 kubelet[2686]: I0527 17:37:33.897144 2686 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:37:33.897301 kubelet[2686]: I0527 17:37:33.897283 2686 desired_state_of_world_populator.go:150] "Desired state populator 
starts to run" May 27 17:37:33.897478 kubelet[2686]: I0527 17:37:33.897458 2686 reconciler.go:26] "Reconciler: start to sync state" May 27 17:37:33.897591 kubelet[2686]: I0527 17:37:33.897549 2686 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:37:33.897805 kubelet[2686]: I0527 17:37:33.897791 2686 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:37:33.898049 kubelet[2686]: I0527 17:37:33.898031 2686 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:37:33.898236 kubelet[2686]: E0527 17:37:33.898209 2686 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:37:33.901274 kubelet[2686]: I0527 17:37:33.901250 2686 factory.go:223] Registration of the containerd container factory successfully May 27 17:37:33.901274 kubelet[2686]: I0527 17:37:33.901268 2686 factory.go:223] Registration of the systemd container factory successfully May 27 17:37:33.901529 kubelet[2686]: I0527 17:37:33.901505 2686 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:37:33.906909 kubelet[2686]: I0527 17:37:33.906859 2686 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 17:37:33.908228 kubelet[2686]: I0527 17:37:33.908211 2686 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 17:37:33.908292 kubelet[2686]: I0527 17:37:33.908283 2686 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:37:33.908383 kubelet[2686]: I0527 17:37:33.908370 2686 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:37:33.908438 kubelet[2686]: I0527 17:37:33.908430 2686 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:37:33.908527 kubelet[2686]: E0527 17:37:33.908510 2686 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:37:33.941153 kubelet[2686]: I0527 17:37:33.941116 2686 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:37:33.941153 kubelet[2686]: I0527 17:37:33.941142 2686 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:37:33.941153 kubelet[2686]: I0527 17:37:33.941165 2686 state_mem.go:36] "Initialized new in-memory state store" May 27 17:37:33.941354 kubelet[2686]: I0527 17:37:33.941296 2686 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:37:33.941354 kubelet[2686]: I0527 17:37:33.941305 2686 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:37:33.941354 kubelet[2686]: I0527 17:37:33.941320 2686 policy_none.go:49] "None policy: Start" May 27 17:37:33.941354 kubelet[2686]: I0527 17:37:33.941329 2686 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:37:33.941354 kubelet[2686]: I0527 17:37:33.941338 2686 state_mem.go:35] "Initializing new in-memory state store" May 27 17:37:33.941468 kubelet[2686]: I0527 17:37:33.941430 2686 state_mem.go:75] "Updated machine memory state" May 27 17:37:33.946391 kubelet[2686]: E0527 17:37:33.946285 2686 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:37:33.946759 kubelet[2686]: I0527 
17:37:33.946636 2686 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:37:33.946759 kubelet[2686]: I0527 17:37:33.946651 2686 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:37:33.947251 kubelet[2686]: I0527 17:37:33.947014 2686 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:37:33.948322 kubelet[2686]: E0527 17:37:33.948299 2686 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:37:34.009961 kubelet[2686]: I0527 17:37:34.009913 2686 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:37:34.010132 kubelet[2686]: I0527 17:37:34.010004 2686 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:37:34.010132 kubelet[2686]: I0527 17:37:34.010048 2686 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:37:34.051254 kubelet[2686]: I0527 17:37:34.051133 2686 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:37:34.098385 kubelet[2686]: I0527 17:37:34.098307 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d54e2471a3c8c65465e426dd9a05f22-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d54e2471a3c8c65465e426dd9a05f22\") " pod="kube-system/kube-apiserver-localhost" May 27 17:37:34.098385 kubelet[2686]: I0527 17:37:34.098363 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:34.098385 kubelet[2686]: I0527 17:37:34.098393 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:34.098620 kubelet[2686]: I0527 17:37:34.098456 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 17:37:34.098620 kubelet[2686]: I0527 17:37:34.098511 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d54e2471a3c8c65465e426dd9a05f22-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d54e2471a3c8c65465e426dd9a05f22\") " pod="kube-system/kube-apiserver-localhost" May 27 17:37:34.098620 kubelet[2686]: I0527 17:37:34.098535 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d54e2471a3c8c65465e426dd9a05f22-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d54e2471a3c8c65465e426dd9a05f22\") " pod="kube-system/kube-apiserver-localhost" May 27 17:37:34.098620 kubelet[2686]: I0527 17:37:34.098552 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:34.098620 kubelet[2686]: I0527 17:37:34.098585 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:34.098776 kubelet[2686]: I0527 17:37:34.098640 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:37:34.160829 kubelet[2686]: I0527 17:37:34.160764 2686 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 17:37:34.161000 kubelet[2686]: I0527 17:37:34.160877 2686 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:37:34.879539 kubelet[2686]: I0527 17:37:34.879476 2686 apiserver.go:52] "Watching apiserver" May 27 17:37:34.897992 kubelet[2686]: I0527 17:37:34.897918 2686 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:37:34.923379 kubelet[2686]: I0527 17:37:34.923341 2686 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:37:34.928347 kubelet[2686]: E0527 17:37:34.928309 2686 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 17:37:34.952741 kubelet[2686]: I0527 17:37:34.952589 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=0.95256847 podStartE2EDuration="952.56847ms" podCreationTimestamp="2025-05-27 17:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:37:34.94277655 +0000 UTC m=+1.134336025" watchObservedRunningTime="2025-05-27 17:37:34.95256847 +0000 UTC m=+1.144127946" May 27 17:37:34.953217 kubelet[2686]: I0527 17:37:34.953180 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.953170941 podStartE2EDuration="953.170941ms" podCreationTimestamp="2025-05-27 17:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:37:34.952880326 +0000 UTC m=+1.144439811" watchObservedRunningTime="2025-05-27 17:37:34.953170941 +0000 UTC m=+1.144730446" May 27 17:37:34.972039 kubelet[2686]: I0527 17:37:34.971971 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.971939273 podStartE2EDuration="971.939273ms" podCreationTimestamp="2025-05-27 17:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:37:34.962130792 +0000 UTC m=+1.153690297" watchObservedRunningTime="2025-05-27 17:37:34.971939273 +0000 UTC m=+1.163498738" May 27 17:37:39.274192 kubelet[2686]: I0527 17:37:39.274144 2686 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:37:39.274735 containerd[1593]: time="2025-05-27T17:37:39.274535132Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 27 17:37:39.275010 kubelet[2686]: I0527 17:37:39.274736 2686 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:37:39.988922 systemd[1]: Created slice kubepods-besteffort-pod6019e22c_858b_4b00_b451_8a0da99d457a.slice - libcontainer container kubepods-besteffort-pod6019e22c_858b_4b00_b451_8a0da99d457a.slice. May 27 17:37:40.035946 kubelet[2686]: I0527 17:37:40.035878 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6019e22c-858b-4b00-b451-8a0da99d457a-kube-proxy\") pod \"kube-proxy-b9ql8\" (UID: \"6019e22c-858b-4b00-b451-8a0da99d457a\") " pod="kube-system/kube-proxy-b9ql8" May 27 17:37:40.035946 kubelet[2686]: I0527 17:37:40.035919 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6019e22c-858b-4b00-b451-8a0da99d457a-xtables-lock\") pod \"kube-proxy-b9ql8\" (UID: \"6019e22c-858b-4b00-b451-8a0da99d457a\") " pod="kube-system/kube-proxy-b9ql8" May 27 17:37:40.035946 kubelet[2686]: I0527 17:37:40.035946 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6019e22c-858b-4b00-b451-8a0da99d457a-lib-modules\") pod \"kube-proxy-b9ql8\" (UID: \"6019e22c-858b-4b00-b451-8a0da99d457a\") " pod="kube-system/kube-proxy-b9ql8" May 27 17:37:40.036258 kubelet[2686]: I0527 17:37:40.035966 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7f46\" (UniqueName: \"kubernetes.io/projected/6019e22c-858b-4b00-b451-8a0da99d457a-kube-api-access-m7f46\") pod \"kube-proxy-b9ql8\" (UID: \"6019e22c-858b-4b00-b451-8a0da99d457a\") " pod="kube-system/kube-proxy-b9ql8" May 27 17:37:40.246115 systemd[1]: Created slice 
kubepods-besteffort-pod0b198d1e_d293_4b0a_8321_c9851f971924.slice - libcontainer container kubepods-besteffort-pod0b198d1e_d293_4b0a_8321_c9851f971924.slice. May 27 17:37:40.308446 containerd[1593]: time="2025-05-27T17:37:40.308395112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9ql8,Uid:6019e22c-858b-4b00-b451-8a0da99d457a,Namespace:kube-system,Attempt:0,}" May 27 17:37:40.332858 containerd[1593]: time="2025-05-27T17:37:40.332799365Z" level=info msg="connecting to shim 928043a72ac9c950d2f79da2f9cc5585ef89c8080105ee0146819248aae996a6" address="unix:///run/containerd/s/6b78ea22040bed2e9c759416915f3677a761063e88813616d6cf21916dbe3222" namespace=k8s.io protocol=ttrpc version=3 May 27 17:37:40.339060 kubelet[2686]: I0527 17:37:40.338871 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r4tn\" (UniqueName: \"kubernetes.io/projected/0b198d1e-d293-4b0a-8321-c9851f971924-kube-api-access-2r4tn\") pod \"tigera-operator-844669ff44-cdpr8\" (UID: \"0b198d1e-d293-4b0a-8321-c9851f971924\") " pod="tigera-operator/tigera-operator-844669ff44-cdpr8" May 27 17:37:40.339060 kubelet[2686]: I0527 17:37:40.338918 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b198d1e-d293-4b0a-8321-c9851f971924-var-lib-calico\") pod \"tigera-operator-844669ff44-cdpr8\" (UID: \"0b198d1e-d293-4b0a-8321-c9851f971924\") " pod="tigera-operator/tigera-operator-844669ff44-cdpr8" May 27 17:37:40.365257 systemd[1]: Started cri-containerd-928043a72ac9c950d2f79da2f9cc5585ef89c8080105ee0146819248aae996a6.scope - libcontainer container 928043a72ac9c950d2f79da2f9cc5585ef89c8080105ee0146819248aae996a6. 
May 27 17:37:40.393153 containerd[1593]: time="2025-05-27T17:37:40.393044957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9ql8,Uid:6019e22c-858b-4b00-b451-8a0da99d457a,Namespace:kube-system,Attempt:0,} returns sandbox id \"928043a72ac9c950d2f79da2f9cc5585ef89c8080105ee0146819248aae996a6\"" May 27 17:37:40.399254 containerd[1593]: time="2025-05-27T17:37:40.399194407Z" level=info msg="CreateContainer within sandbox \"928043a72ac9c950d2f79da2f9cc5585ef89c8080105ee0146819248aae996a6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:37:40.412084 containerd[1593]: time="2025-05-27T17:37:40.412023835Z" level=info msg="Container bc4206ece2e83f65a6280a5cf7f560e15ced7e1f7dfcf239f20c0bf48255890b: CDI devices from CRI Config.CDIDevices: []" May 27 17:37:40.416312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939085142.mount: Deactivated successfully. May 27 17:37:40.421445 containerd[1593]: time="2025-05-27T17:37:40.421382560Z" level=info msg="CreateContainer within sandbox \"928043a72ac9c950d2f79da2f9cc5585ef89c8080105ee0146819248aae996a6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc4206ece2e83f65a6280a5cf7f560e15ced7e1f7dfcf239f20c0bf48255890b\"" May 27 17:37:40.421988 containerd[1593]: time="2025-05-27T17:37:40.421958925Z" level=info msg="StartContainer for \"bc4206ece2e83f65a6280a5cf7f560e15ced7e1f7dfcf239f20c0bf48255890b\"" May 27 17:37:40.423337 containerd[1593]: time="2025-05-27T17:37:40.423307748Z" level=info msg="connecting to shim bc4206ece2e83f65a6280a5cf7f560e15ced7e1f7dfcf239f20c0bf48255890b" address="unix:///run/containerd/s/6b78ea22040bed2e9c759416915f3677a761063e88813616d6cf21916dbe3222" protocol=ttrpc version=3 May 27 17:37:40.447216 systemd[1]: Started cri-containerd-bc4206ece2e83f65a6280a5cf7f560e15ced7e1f7dfcf239f20c0bf48255890b.scope - libcontainer container bc4206ece2e83f65a6280a5cf7f560e15ced7e1f7dfcf239f20c0bf48255890b. 
May 27 17:37:40.492732 containerd[1593]: time="2025-05-27T17:37:40.492677879Z" level=info msg="StartContainer for \"bc4206ece2e83f65a6280a5cf7f560e15ced7e1f7dfcf239f20c0bf48255890b\" returns successfully" May 27 17:37:40.549727 containerd[1593]: time="2025-05-27T17:37:40.549584638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-cdpr8,Uid:0b198d1e-d293-4b0a-8321-c9851f971924,Namespace:tigera-operator,Attempt:0,}" May 27 17:37:40.572362 containerd[1593]: time="2025-05-27T17:37:40.572307378Z" level=info msg="connecting to shim 238cf874bf6c72089327ab53d5335e51c25bc3ec66dcd55d7e0f392008f74924" address="unix:///run/containerd/s/21267bc93c49ffe7ab19bb96afbc608bc5eb8935b7d50cea300c3ab32f3d1aec" namespace=k8s.io protocol=ttrpc version=3 May 27 17:37:40.601232 systemd[1]: Started cri-containerd-238cf874bf6c72089327ab53d5335e51c25bc3ec66dcd55d7e0f392008f74924.scope - libcontainer container 238cf874bf6c72089327ab53d5335e51c25bc3ec66dcd55d7e0f392008f74924. May 27 17:37:40.673905 containerd[1593]: time="2025-05-27T17:37:40.673856030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-cdpr8,Uid:0b198d1e-d293-4b0a-8321-c9851f971924,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"238cf874bf6c72089327ab53d5335e51c25bc3ec66dcd55d7e0f392008f74924\"" May 27 17:37:40.675862 containerd[1593]: time="2025-05-27T17:37:40.675830450Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 17:37:40.963012 kubelet[2686]: I0527 17:37:40.962928 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b9ql8" podStartSLOduration=1.962907094 podStartE2EDuration="1.962907094s" podCreationTimestamp="2025-05-27 17:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:37:40.962822323 +0000 UTC m=+7.154381798" watchObservedRunningTime="2025-05-27 
17:37:40.962907094 +0000 UTC m=+7.154466569" May 27 17:37:42.839007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175029673.mount: Deactivated successfully. May 27 17:37:43.765635 containerd[1593]: time="2025-05-27T17:37:43.765514731Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:43.766832 containerd[1593]: time="2025-05-27T17:37:43.766612883Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 17:37:43.767887 containerd[1593]: time="2025-05-27T17:37:43.767837974Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:43.770569 containerd[1593]: time="2025-05-27T17:37:43.770521360Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:43.771195 containerd[1593]: time="2025-05-27T17:37:43.771149701Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 3.095079426s" May 27 17:37:43.771195 containerd[1593]: time="2025-05-27T17:37:43.771189256Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 17:37:43.776482 containerd[1593]: time="2025-05-27T17:37:43.776436530Z" level=info msg="CreateContainer within sandbox \"238cf874bf6c72089327ab53d5335e51c25bc3ec66dcd55d7e0f392008f74924\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 17:37:43.788872 containerd[1593]: time="2025-05-27T17:37:43.788841700Z" level=info msg="Container 344de019b2b097c3866772d52410202273c950a14ba61be1972d83e95e3431bf: CDI devices from CRI Config.CDIDevices: []" May 27 17:37:43.932219 containerd[1593]: time="2025-05-27T17:37:43.932156026Z" level=info msg="CreateContainer within sandbox \"238cf874bf6c72089327ab53d5335e51c25bc3ec66dcd55d7e0f392008f74924\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"344de019b2b097c3866772d52410202273c950a14ba61be1972d83e95e3431bf\"" May 27 17:37:43.932679 containerd[1593]: time="2025-05-27T17:37:43.932649491Z" level=info msg="StartContainer for \"344de019b2b097c3866772d52410202273c950a14ba61be1972d83e95e3431bf\"" May 27 17:37:43.933669 containerd[1593]: time="2025-05-27T17:37:43.933632674Z" level=info msg="connecting to shim 344de019b2b097c3866772d52410202273c950a14ba61be1972d83e95e3431bf" address="unix:///run/containerd/s/21267bc93c49ffe7ab19bb96afbc608bc5eb8935b7d50cea300c3ab32f3d1aec" protocol=ttrpc version=3 May 27 17:37:43.999300 systemd[1]: Started cri-containerd-344de019b2b097c3866772d52410202273c950a14ba61be1972d83e95e3431bf.scope - libcontainer container 344de019b2b097c3866772d52410202273c950a14ba61be1972d83e95e3431bf. May 27 17:37:44.098803 containerd[1593]: time="2025-05-27T17:37:44.098667673Z" level=info msg="StartContainer for \"344de019b2b097c3866772d52410202273c950a14ba61be1972d83e95e3431bf\" returns successfully" May 27 17:37:44.414674 update_engine[1581]: I20250527 17:37:44.414505 1581 update_attempter.cc:509] Updating boot flags... 
May 27 17:37:44.963741 kubelet[2686]: I0527 17:37:44.963660 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-cdpr8" podStartSLOduration=1.866957684 podStartE2EDuration="4.96363933s" podCreationTimestamp="2025-05-27 17:37:40 +0000 UTC" firstStartedPulling="2025-05-27 17:37:40.675302056 +0000 UTC m=+6.866861521" lastFinishedPulling="2025-05-27 17:37:43.771983682 +0000 UTC m=+9.963543167" observedRunningTime="2025-05-27 17:37:44.963452977 +0000 UTC m=+11.155012462" watchObservedRunningTime="2025-05-27 17:37:44.96363933 +0000 UTC m=+11.155198805" May 27 17:37:49.247923 sudo[1794]: pam_unix(sudo:session): session closed for user root May 27 17:37:49.250095 sshd[1793]: Connection closed by 10.0.0.1 port 42274 May 27 17:37:49.251022 sshd-session[1791]: pam_unix(sshd:session): session closed for user core May 27 17:37:49.255729 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:42274.service: Deactivated successfully. May 27 17:37:49.258961 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:37:49.259344 systemd[1]: session-7.scope: Consumed 5.499s CPU time, 224M memory peak. May 27 17:37:49.260865 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. May 27 17:37:49.263815 systemd-logind[1577]: Removed session 7. May 27 17:37:51.681998 systemd[1]: Created slice kubepods-besteffort-poda7293438_b79e_4238_8929_565ce3db16f9.slice - libcontainer container kubepods-besteffort-poda7293438_b79e_4238_8929_565ce3db16f9.slice. 
May 27 17:37:51.713402 kubelet[2686]: I0527 17:37:51.713324 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2r9z\" (UniqueName: \"kubernetes.io/projected/a7293438-b79e-4238-8929-565ce3db16f9-kube-api-access-t2r9z\") pod \"calico-typha-cf94fd9d9-jw2xc\" (UID: \"a7293438-b79e-4238-8929-565ce3db16f9\") " pod="calico-system/calico-typha-cf94fd9d9-jw2xc" May 27 17:37:51.713402 kubelet[2686]: I0527 17:37:51.713401 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7293438-b79e-4238-8929-565ce3db16f9-tigera-ca-bundle\") pod \"calico-typha-cf94fd9d9-jw2xc\" (UID: \"a7293438-b79e-4238-8929-565ce3db16f9\") " pod="calico-system/calico-typha-cf94fd9d9-jw2xc" May 27 17:37:51.713884 kubelet[2686]: I0527 17:37:51.713432 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a7293438-b79e-4238-8929-565ce3db16f9-typha-certs\") pod \"calico-typha-cf94fd9d9-jw2xc\" (UID: \"a7293438-b79e-4238-8929-565ce3db16f9\") " pod="calico-system/calico-typha-cf94fd9d9-jw2xc" May 27 17:37:51.986427 containerd[1593]: time="2025-05-27T17:37:51.986360858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf94fd9d9-jw2xc,Uid:a7293438-b79e-4238-8929-565ce3db16f9,Namespace:calico-system,Attempt:0,}" May 27 17:37:52.167538 containerd[1593]: time="2025-05-27T17:37:52.166578679Z" level=info msg="connecting to shim 08cc563efe5656d98382bfe475b78bea3e962321de1891a7d326991b268e5744" address="unix:///run/containerd/s/71ba007dc3266bae4dec7ca7f460e653a0f8117f175965a19c7bc2c2c520fdc1" namespace=k8s.io protocol=ttrpc version=3 May 27 17:37:52.193577 systemd[1]: Created slice kubepods-besteffort-pod3cc8b564_e3d8_43e3_aa46_5f55afd0e7b6.slice - libcontainer container 
kubepods-besteffort-pod3cc8b564_e3d8_43e3_aa46_5f55afd0e7b6.slice. May 27 17:37:52.216352 kubelet[2686]: I0527 17:37:52.216251 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-lib-modules\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.216667 kubelet[2686]: I0527 17:37:52.216519 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-cni-log-dir\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.216667 kubelet[2686]: I0527 17:37:52.216549 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mxf8\" (UniqueName: \"kubernetes.io/projected/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-kube-api-access-7mxf8\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.216667 kubelet[2686]: I0527 17:37:52.216592 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-cni-net-dir\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.216667 kubelet[2686]: I0527 17:37:52.216608 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-tigera-ca-bundle\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " 
pod="calico-system/calico-node-dmk8f" May 27 17:37:52.216950 kubelet[2686]: I0527 17:37:52.216881 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-var-run-calico\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.219087 kubelet[2686]: I0527 17:37:52.216906 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-node-certs\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.219087 kubelet[2686]: I0527 17:37:52.217022 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-policysync\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.219087 kubelet[2686]: I0527 17:37:52.217041 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-var-lib-calico\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.219406 kubelet[2686]: I0527 17:37:52.219362 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-flexvol-driver-host\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.219544 
kubelet[2686]: I0527 17:37:52.219507 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-cni-bin-dir\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.219676 kubelet[2686]: I0527 17:37:52.219529 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6-xtables-lock\") pod \"calico-node-dmk8f\" (UID: \"3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6\") " pod="calico-system/calico-node-dmk8f" May 27 17:37:52.227279 systemd[1]: Started cri-containerd-08cc563efe5656d98382bfe475b78bea3e962321de1891a7d326991b268e5744.scope - libcontainer container 08cc563efe5656d98382bfe475b78bea3e962321de1891a7d326991b268e5744. May 27 17:37:52.283765 kubelet[2686]: E0527 17:37:52.283564 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-228hb" podUID="fcd8288a-3a1a-4f74-9ffc-0a9589729432" May 27 17:37:52.285763 containerd[1593]: time="2025-05-27T17:37:52.285705705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf94fd9d9-jw2xc,Uid:a7293438-b79e-4238-8929-565ce3db16f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"08cc563efe5656d98382bfe475b78bea3e962321de1891a7d326991b268e5744\"" May 27 17:37:52.296634 containerd[1593]: time="2025-05-27T17:37:52.296588124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 17:37:52.321289 kubelet[2686]: I0527 17:37:52.320897 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" 
(UniqueName: \"kubernetes.io/host-path/fcd8288a-3a1a-4f74-9ffc-0a9589729432-varrun\") pod \"csi-node-driver-228hb\" (UID: \"fcd8288a-3a1a-4f74-9ffc-0a9589729432\") " pod="calico-system/csi-node-driver-228hb" May 27 17:37:52.322753 kubelet[2686]: I0527 17:37:52.322530 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fcd8288a-3a1a-4f74-9ffc-0a9589729432-registration-dir\") pod \"csi-node-driver-228hb\" (UID: \"fcd8288a-3a1a-4f74-9ffc-0a9589729432\") " pod="calico-system/csi-node-driver-228hb" May 27 17:37:52.322753 kubelet[2686]: I0527 17:37:52.322563 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skf79\" (UniqueName: \"kubernetes.io/projected/fcd8288a-3a1a-4f74-9ffc-0a9589729432-kube-api-access-skf79\") pod \"csi-node-driver-228hb\" (UID: \"fcd8288a-3a1a-4f74-9ffc-0a9589729432\") " pod="calico-system/csi-node-driver-228hb" May 27 17:37:52.322843 kubelet[2686]: I0527 17:37:52.322763 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fcd8288a-3a1a-4f74-9ffc-0a9589729432-socket-dir\") pod \"csi-node-driver-228hb\" (UID: \"fcd8288a-3a1a-4f74-9ffc-0a9589729432\") " pod="calico-system/csi-node-driver-228hb" May 27 17:37:52.323200 kubelet[2686]: I0527 17:37:52.323148 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fcd8288a-3a1a-4f74-9ffc-0a9589729432-kubelet-dir\") pod \"csi-node-driver-228hb\" (UID: \"fcd8288a-3a1a-4f74-9ffc-0a9589729432\") " pod="calico-system/csi-node-driver-228hb" May 27 17:37:52.333833 kubelet[2686]: E0527 17:37:52.333683 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.333833 
kubelet[2686]: W0527 17:37:52.333712 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.333833 kubelet[2686]: E0527 17:37:52.333735 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:52.343019 kubelet[2686]: E0527 17:37:52.342987 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.343019 kubelet[2686]: W0527 17:37:52.343012 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.343121 kubelet[2686]: E0527 17:37:52.343030 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:52.424572 kubelet[2686]: E0527 17:37:52.424523 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.424572 kubelet[2686]: W0527 17:37:52.424556 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.424572 kubelet[2686]: E0527 17:37:52.424580 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:52.424908 kubelet[2686]: E0527 17:37:52.424879 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.424908 kubelet[2686]: W0527 17:37:52.424896 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.424908 kubelet[2686]: E0527 17:37:52.424907 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:52.425247 kubelet[2686]: E0527 17:37:52.425226 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.425247 kubelet[2686]: W0527 17:37:52.425241 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.425307 kubelet[2686]: E0527 17:37:52.425252 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:52.425620 kubelet[2686]: E0527 17:37:52.425589 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.425620 kubelet[2686]: W0527 17:37:52.425605 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.425620 kubelet[2686]: E0527 17:37:52.425616 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:52.425879 kubelet[2686]: E0527 17:37:52.425851 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.425879 kubelet[2686]: W0527 17:37:52.425867 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.425879 kubelet[2686]: E0527 17:37:52.425878 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:52.426199 kubelet[2686]: E0527 17:37:52.426170 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.426199 kubelet[2686]: W0527 17:37:52.426188 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.426262 kubelet[2686]: E0527 17:37:52.426201 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:52.426457 kubelet[2686]: E0527 17:37:52.426429 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.426457 kubelet[2686]: W0527 17:37:52.426445 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.426457 kubelet[2686]: E0527 17:37:52.426455 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:52.426709 kubelet[2686]: E0527 17:37:52.426682 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.426709 kubelet[2686]: W0527 17:37:52.426697 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.426709 kubelet[2686]: E0527 17:37:52.426708 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:52.426951 kubelet[2686]: E0527 17:37:52.426931 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.426951 kubelet[2686]: W0527 17:37:52.426945 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.426993 kubelet[2686]: E0527 17:37:52.426955 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:52.427224 kubelet[2686]: E0527 17:37:52.427196 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.427224 kubelet[2686]: W0527 17:37:52.427211 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.427224 kubelet[2686]: E0527 17:37:52.427221 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:52.427467 kubelet[2686]: E0527 17:37:52.427447 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.427467 kubelet[2686]: W0527 17:37:52.427462 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.427516 kubelet[2686]: E0527 17:37:52.427473 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:52.441133 kubelet[2686]: E0527 17:37:52.441102 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:52.441133 kubelet[2686]: W0527 17:37:52.441122 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:52.441296 kubelet[2686]: E0527 17:37:52.441141 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:52.500113 containerd[1593]: time="2025-05-27T17:37:52.500049942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmk8f,Uid:3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6,Namespace:calico-system,Attempt:0,}" May 27 17:37:52.909929 containerd[1593]: time="2025-05-27T17:37:52.909865295Z" level=info msg="connecting to shim 1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149" address="unix:///run/containerd/s/5200eab89e8f1396b1a77d3292e2dd244bcdb73a8c82afb2843318e08bd62d38" namespace=k8s.io protocol=ttrpc version=3 May 27 17:37:52.940223 systemd[1]: Started cri-containerd-1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149.scope - libcontainer container 1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149. 
May 27 17:37:52.970932 containerd[1593]: time="2025-05-27T17:37:52.970885697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmk8f,Uid:3cc8b564-e3d8-43e3-aa46-5f55afd0e7b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149\"" May 27 17:37:53.909800 kubelet[2686]: E0527 17:37:53.909739 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-228hb" podUID="fcd8288a-3a1a-4f74-9ffc-0a9589729432" May 27 17:37:53.995688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215403581.mount: Deactivated successfully. May 27 17:37:54.416932 containerd[1593]: time="2025-05-27T17:37:54.416861423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:54.417606 containerd[1593]: time="2025-05-27T17:37:54.417510587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 17:37:54.418719 containerd[1593]: time="2025-05-27T17:37:54.418685522Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:54.420444 containerd[1593]: time="2025-05-27T17:37:54.420407939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:54.421127 containerd[1593]: time="2025-05-27T17:37:54.421064457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.124249926s" May 27 17:37:54.421127 containerd[1593]: time="2025-05-27T17:37:54.421121695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 17:37:54.422342 containerd[1593]: time="2025-05-27T17:37:54.422317068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 17:37:54.439417 containerd[1593]: time="2025-05-27T17:37:54.439335469Z" level=info msg="CreateContainer within sandbox \"08cc563efe5656d98382bfe475b78bea3e962321de1891a7d326991b268e5744\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 17:37:54.449114 containerd[1593]: time="2025-05-27T17:37:54.448929194Z" level=info msg="Container 2ce70180e383ba60df01aab2cd343d90f532c16df808e45933fba93e65e421b2: CDI devices from CRI Config.CDIDevices: []" May 27 17:37:54.458555 containerd[1593]: time="2025-05-27T17:37:54.458489416Z" level=info msg="CreateContainer within sandbox \"08cc563efe5656d98382bfe475b78bea3e962321de1891a7d326991b268e5744\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ce70180e383ba60df01aab2cd343d90f532c16df808e45933fba93e65e421b2\"" May 27 17:37:54.459285 containerd[1593]: time="2025-05-27T17:37:54.459233590Z" level=info msg="StartContainer for \"2ce70180e383ba60df01aab2cd343d90f532c16df808e45933fba93e65e421b2\"" May 27 17:37:54.460775 containerd[1593]: time="2025-05-27T17:37:54.460737484Z" level=info msg="connecting to shim 2ce70180e383ba60df01aab2cd343d90f532c16df808e45933fba93e65e421b2" address="unix:///run/containerd/s/71ba007dc3266bae4dec7ca7f460e653a0f8117f175965a19c7bc2c2c520fdc1" protocol=ttrpc version=3 May 27 
17:37:54.493247 systemd[1]: Started cri-containerd-2ce70180e383ba60df01aab2cd343d90f532c16df808e45933fba93e65e421b2.scope - libcontainer container 2ce70180e383ba60df01aab2cd343d90f532c16df808e45933fba93e65e421b2. May 27 17:37:54.548619 containerd[1593]: time="2025-05-27T17:37:54.548557293Z" level=info msg="StartContainer for \"2ce70180e383ba60df01aab2cd343d90f532c16df808e45933fba93e65e421b2\" returns successfully" May 27 17:37:55.013965 kubelet[2686]: E0527 17:37:55.013907 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:55.013965 kubelet[2686]: W0527 17:37:55.013932 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:55.013965 kubelet[2686]: E0527 17:37:55.013952 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:55.014480 kubelet[2686]: E0527 17:37:55.014153 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:55.014480 kubelet[2686]: W0527 17:37:55.014162 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:55.014480 kubelet[2686]: E0527 17:37:55.014171 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:55.016618 kubelet[2686]: E0527 17:37:55.016603 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:55.016618 kubelet[2686]: W0527 17:37:55.016613 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:55.016664 kubelet[2686]: E0527 17:37:55.016621 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:55.024830 kubelet[2686]: I0527 17:37:55.024770 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cf94fd9d9-jw2xc" podStartSLOduration=1.89541424 podStartE2EDuration="4.024759326s" podCreationTimestamp="2025-05-27 17:37:51 +0000 UTC" firstStartedPulling="2025-05-27 17:37:52.292718404 +0000 UTC m=+18.484277879" lastFinishedPulling="2025-05-27 17:37:54.42206349 +0000 UTC m=+20.613622965" observedRunningTime="2025-05-27 17:37:55.023491006 +0000 UTC m=+21.215050481" watchObservedRunningTime="2025-05-27 17:37:55.024759326 +0000 UTC m=+21.216318801" May 27 17:37:55.064952 kubelet[2686]: E0527 17:37:55.064910 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:55.064952 kubelet[2686]: W0527 17:37:55.064934 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:55.064952 kubelet[2686]: E0527 17:37:55.064953 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:55.069039 kubelet[2686]: E0527 17:37:55.069016 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:55.069039 kubelet[2686]: W0527 17:37:55.069028 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:55.069039 kubelet[2686]: E0527 17:37:55.069037 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:55.069254 kubelet[2686]: E0527 17:37:55.069237 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:55.069254 kubelet[2686]: W0527 17:37:55.069248 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:55.069305 kubelet[2686]: E0527 17:37:55.069257 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:55.069734 kubelet[2686]: E0527 17:37:55.069707 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:55.069734 kubelet[2686]: W0527 17:37:55.069722 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:55.069734 kubelet[2686]: E0527 17:37:55.069731 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:55.909640 kubelet[2686]: E0527 17:37:55.909569 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-228hb" podUID="fcd8288a-3a1a-4f74-9ffc-0a9589729432" May 27 17:37:55.980913 kubelet[2686]: I0527 17:37:55.980867 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:37:56.020040 containerd[1593]: time="2025-05-27T17:37:56.019956775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:56.020936 containerd[1593]: time="2025-05-27T17:37:56.020897287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 17:37:56.022083 kubelet[2686]: E0527 17:37:56.022026 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.022083 kubelet[2686]: W0527 17:37:56.022052 2686 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.022083 kubelet[2686]: E0527 17:37:56.022095 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.022658 kubelet[2686]: E0527 17:37:56.022433 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.022658 kubelet[2686]: W0527 17:37:56.022444 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.022658 kubelet[2686]: E0527 17:37:56.022455 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.022737 kubelet[2686]: E0527 17:37:56.022670 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.022737 kubelet[2686]: W0527 17:37:56.022679 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.022737 kubelet[2686]: E0527 17:37:56.022687 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.023119 containerd[1593]: time="2025-05-27T17:37:56.022808928Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:56.023266 kubelet[2686]: E0527 17:37:56.023242 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.023266 kubelet[2686]: W0527 17:37:56.023257 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.023266 kubelet[2686]: E0527 17:37:56.023267 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.023767 kubelet[2686]: E0527 17:37:56.023722 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.023767 kubelet[2686]: W0527 17:37:56.023750 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.023874 kubelet[2686]: E0527 17:37:56.023774 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.024023 kubelet[2686]: E0527 17:37:56.023997 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.024023 kubelet[2686]: W0527 17:37:56.024011 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.024023 kubelet[2686]: E0527 17:37:56.024021 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.024244 kubelet[2686]: E0527 17:37:56.024228 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.024244 kubelet[2686]: W0527 17:37:56.024243 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.024317 kubelet[2686]: E0527 17:37:56.024254 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.024533 kubelet[2686]: E0527 17:37:56.024510 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.024533 kubelet[2686]: W0527 17:37:56.024527 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.024617 kubelet[2686]: E0527 17:37:56.024540 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.024783 kubelet[2686]: E0527 17:37:56.024765 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.024783 kubelet[2686]: W0527 17:37:56.024779 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.024844 kubelet[2686]: E0527 17:37:56.024790 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.025030 kubelet[2686]: E0527 17:37:56.024996 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.025030 kubelet[2686]: W0527 17:37:56.025009 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.025030 kubelet[2686]: E0527 17:37:56.025020 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.025132 containerd[1593]: time="2025-05-27T17:37:56.025012319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:37:56.025249 kubelet[2686]: E0527 17:37:56.025232 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.025249 kubelet[2686]: W0527 17:37:56.025246 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.025295 kubelet[2686]: E0527 17:37:56.025256 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.025477 kubelet[2686]: E0527 17:37:56.025461 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.025477 kubelet[2686]: W0527 17:37:56.025474 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.025518 kubelet[2686]: E0527 17:37:56.025484 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.025691 kubelet[2686]: E0527 17:37:56.025675 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.025691 kubelet[2686]: W0527 17:37:56.025688 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.025744 kubelet[2686]: E0527 17:37:56.025699 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.025909 kubelet[2686]: E0527 17:37:56.025893 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.025909 kubelet[2686]: W0527 17:37:56.025906 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.025959 kubelet[2686]: E0527 17:37:56.025915 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.025990 containerd[1593]: time="2025-05-27T17:37:56.025899922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.603553799s" May 27 17:37:56.025990 containerd[1593]: time="2025-05-27T17:37:56.025930208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 17:37:56.026272 kubelet[2686]: E0527 17:37:56.026257 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.026272 kubelet[2686]: W0527 17:37:56.026269 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.026331 kubelet[2686]: E0527 17:37:56.026279 2686 plugins.go:703] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.031466 containerd[1593]: time="2025-05-27T17:37:56.031424399Z" level=info msg="CreateContainer within sandbox \"1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 17:37:56.041699 containerd[1593]: time="2025-05-27T17:37:56.041638572Z" level=info msg="Container f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8: CDI devices from CRI Config.CDIDevices: []" May 27 17:37:56.053191 containerd[1593]: time="2025-05-27T17:37:56.053145600Z" level=info msg="CreateContainer within sandbox \"1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8\"" May 27 17:37:56.053717 containerd[1593]: time="2025-05-27T17:37:56.053683223Z" level=info msg="StartContainer for \"f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8\"" May 27 17:37:56.055370 containerd[1593]: time="2025-05-27T17:37:56.055330706Z" level=info msg="connecting to shim f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8" address="unix:///run/containerd/s/5200eab89e8f1396b1a77d3292e2dd244bcdb73a8c82afb2843318e08bd62d38" protocol=ttrpc version=3 May 27 17:37:56.076096 kubelet[2686]: E0527 17:37:56.073566 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076096 kubelet[2686]: W0527 17:37:56.073588 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076096 kubelet[2686]: E0527 17:37:56.073608 2686 plugins.go:703] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.076096 kubelet[2686]: E0527 17:37:56.073812 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076096 kubelet[2686]: W0527 17:37:56.073819 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076096 kubelet[2686]: E0527 17:37:56.073827 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.076096 kubelet[2686]: E0527 17:37:56.073999 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076096 kubelet[2686]: W0527 17:37:56.074006 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076096 kubelet[2686]: E0527 17:37:56.074013 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.076096 kubelet[2686]: E0527 17:37:56.074239 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076439 kubelet[2686]: W0527 17:37:56.074246 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076439 kubelet[2686]: E0527 17:37:56.074255 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.076439 kubelet[2686]: E0527 17:37:56.074457 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076439 kubelet[2686]: W0527 17:37:56.074464 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076439 kubelet[2686]: E0527 17:37:56.074471 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.076439 kubelet[2686]: E0527 17:37:56.074638 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076439 kubelet[2686]: W0527 17:37:56.074645 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076439 kubelet[2686]: E0527 17:37:56.074652 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.076439 kubelet[2686]: E0527 17:37:56.074851 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076439 kubelet[2686]: W0527 17:37:56.074858 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076652 kubelet[2686]: E0527 17:37:56.074866 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.076652 kubelet[2686]: E0527 17:37:56.075248 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076652 kubelet[2686]: W0527 17:37:56.075256 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076652 kubelet[2686]: E0527 17:37:56.075264 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.076652 kubelet[2686]: E0527 17:37:56.075474 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076652 kubelet[2686]: W0527 17:37:56.075481 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076652 kubelet[2686]: E0527 17:37:56.075488 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.076652 kubelet[2686]: E0527 17:37:56.075679 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076652 kubelet[2686]: W0527 17:37:56.075685 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076652 kubelet[2686]: E0527 17:37:56.075692 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.076893 kubelet[2686]: E0527 17:37:56.075879 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076893 kubelet[2686]: W0527 17:37:56.075886 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076893 kubelet[2686]: E0527 17:37:56.075893 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.076893 kubelet[2686]: E0527 17:37:56.076118 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076893 kubelet[2686]: W0527 17:37:56.076126 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076893 kubelet[2686]: E0527 17:37:56.076133 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.076893 kubelet[2686]: E0527 17:37:56.076537 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.076893 kubelet[2686]: W0527 17:37:56.076545 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.076893 kubelet[2686]: E0527 17:37:56.076553 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.076893 kubelet[2686]: E0527 17:37:56.076733 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.078482 kubelet[2686]: W0527 17:37:56.076742 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.078482 kubelet[2686]: E0527 17:37:56.076749 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.078482 kubelet[2686]: E0527 17:37:56.076921 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.078482 kubelet[2686]: W0527 17:37:56.076928 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.078482 kubelet[2686]: E0527 17:37:56.076936 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:37:56.078482 kubelet[2686]: E0527 17:37:56.077155 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.078482 kubelet[2686]: W0527 17:37:56.077163 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.078482 kubelet[2686]: E0527 17:37:56.077170 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:37:56.078482 kubelet[2686]: E0527 17:37:56.077367 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:37:56.078482 kubelet[2686]: W0527 17:37:56.077376 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:37:56.078218 systemd[1]: Started cri-containerd-f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8.scope - libcontainer container f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8. May 27 17:37:56.078924 kubelet[2686]: E0527 17:37:56.077383 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 27 17:37:56.078924 kubelet[2686]: E0527 17:37:56.077668 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:37:56.078924 kubelet[2686]: W0527 17:37:56.077675 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:37:56.078924 kubelet[2686]: E0527 17:37:56.077683 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:37:56.125503 containerd[1593]: time="2025-05-27T17:37:56.125448623Z" level=info msg="StartContainer for \"f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8\" returns successfully"
May 27 17:37:56.138445 systemd[1]: cri-containerd-f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8.scope: Deactivated successfully.
May 27 17:37:56.138856 systemd[1]: cri-containerd-f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8.scope: Consumed 42ms CPU time, 6.3M memory peak, 4.6M written to disk.
May 27 17:37:56.141866 containerd[1593]: time="2025-05-27T17:37:56.141758360Z" level=info msg="received exit event container_id:\"f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8\" id:\"f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8\" pid:3381 exited_at:{seconds:1748367476 nanos:141252837}"
May 27 17:37:56.142025 containerd[1593]: time="2025-05-27T17:37:56.141861383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8\" id:\"f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8\" pid:3381 exited_at:{seconds:1748367476 nanos:141252837}"
May 27 17:37:56.165971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6b98b19a1381cee6ee44dc200eb0b2eece247839cf382d127cb5f7da68cbcd8-rootfs.mount: Deactivated successfully.
May 27 17:37:57.911875 kubelet[2686]: E0527 17:37:57.911809 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-228hb" podUID="fcd8288a-3a1a-4f74-9ffc-0a9589729432"
May 27 17:37:57.988771 containerd[1593]: time="2025-05-27T17:37:57.988709838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\""
May 27 17:37:59.911563 kubelet[2686]: E0527 17:37:59.911122 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-228hb" podUID="fcd8288a-3a1a-4f74-9ffc-0a9589729432"
May 27 17:38:00.653688 containerd[1593]: time="2025-05-27T17:38:00.653619420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:38:00.654411 containerd[1593]: time="2025-05-27T17:38:00.654370273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568"
May 27 17:38:00.655660 containerd[1593]: time="2025-05-27T17:38:00.655609175Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:38:00.658144 containerd[1593]: time="2025-05-27T17:38:00.658105804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:38:00.658897 containerd[1593]: time="2025-05-27T17:38:00.658853211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 2.670094871s"
May 27 17:38:00.658897 containerd[1593]: time="2025-05-27T17:38:00.658892765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\""
May 27 17:38:00.664961 containerd[1593]: time="2025-05-27T17:38:00.664905050Z" level=info msg="CreateContainer within sandbox \"1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 27 17:38:00.675130 containerd[1593]: time="2025-05-27T17:38:00.675055422Z" level=info msg="Container 0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d: CDI devices from CRI Config.CDIDevices: []"
May 27 17:38:00.687682 containerd[1593]: time="2025-05-27T17:38:00.687604609Z" level=info msg="CreateContainer within sandbox \"1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d\""
May 27 17:38:00.688347 containerd[1593]: time="2025-05-27T17:38:00.688300950Z" level=info msg="StartContainer for \"0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d\""
May 27 17:38:00.690451 containerd[1593]: time="2025-05-27T17:38:00.690369193Z" level=info msg="connecting to shim 0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d" address="unix:///run/containerd/s/5200eab89e8f1396b1a77d3292e2dd244bcdb73a8c82afb2843318e08bd62d38" protocol=ttrpc version=3
May 27 17:38:00.722357 systemd[1]: Started cri-containerd-0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d.scope - libcontainer container 0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d.
May 27 17:38:00.773329 containerd[1593]: time="2025-05-27T17:38:00.773276892Z" level=info msg="StartContainer for \"0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d\" returns successfully"
May 27 17:38:01.786128 systemd[1]: cri-containerd-0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d.scope: Deactivated successfully.
May 27 17:38:01.786528 systemd[1]: cri-containerd-0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d.scope: Consumed 644ms CPU time, 178.8M memory peak, 5.6M read from disk, 170.9M written to disk.
May 27 17:38:01.787652 containerd[1593]: time="2025-05-27T17:38:01.787600685Z" level=info msg="received exit event container_id:\"0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d\" id:\"0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d\" pid:3440 exited_at:{seconds:1748367481 nanos:787348841}"
May 27 17:38:01.788123 containerd[1593]: time="2025-05-27T17:38:01.788064838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d\" id:\"0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d\" pid:3440 exited_at:{seconds:1748367481 nanos:787348841}"
May 27 17:38:01.831445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fbdcbb4003ab328dc65694d2f00f0de91190d17a8ab96e303c9b20661b5e74d-rootfs.mount: Deactivated successfully.
May 27 17:38:01.909870 kubelet[2686]: E0527 17:38:01.909334 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-228hb" podUID="fcd8288a-3a1a-4f74-9ffc-0a9589729432"
May 27 17:38:01.939207 kubelet[2686]: I0527 17:38:01.939118 2686 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 27 17:38:02.191614 systemd[1]: Created slice kubepods-besteffort-podb005d8ad_4da9_4276_9016_e8cce20cb81b.slice - libcontainer container kubepods-besteffort-podb005d8ad_4da9_4276_9016_e8cce20cb81b.slice.
May 27 17:38:02.199720 systemd[1]: Created slice kubepods-burstable-pod093120ac_0553_49ef_a766_a2f9173d2bc1.slice - libcontainer container kubepods-burstable-pod093120ac_0553_49ef_a766_a2f9173d2bc1.slice.
May 27 17:38:02.207805 systemd[1]: Created slice kubepods-besteffort-pod94675efa_847e_427a_824d_b12d79137e81.slice - libcontainer container kubepods-besteffort-pod94675efa_847e_427a_824d_b12d79137e81.slice.
May 27 17:38:02.214158 systemd[1]: Created slice kubepods-besteffort-podea297b7e_9a74_4e9e_a08f_ee2c202087d1.slice - libcontainer container kubepods-besteffort-podea297b7e_9a74_4e9e_a08f_ee2c202087d1.slice.
May 27 17:38:02.222532 systemd[1]: Created slice kubepods-besteffort-podd27d1841_eda2_4baf_a093_ef4da86ab409.slice - libcontainer container kubepods-besteffort-podd27d1841_eda2_4baf_a093_ef4da86ab409.slice.
May 27 17:38:02.227462 systemd[1]: Created slice kubepods-besteffort-pod76b8a2b2_e971_4c11_be31_8a11df3e321a.slice - libcontainer container kubepods-besteffort-pod76b8a2b2_e971_4c11_be31_8a11df3e321a.slice.
May 27 17:38:02.235529 systemd[1]: Created slice kubepods-burstable-podb06ffdf2_146c_4120_a1e6_0c4f3555105e.slice - libcontainer container kubepods-burstable-podb06ffdf2_146c_4120_a1e6_0c4f3555105e.slice.
May 27 17:38:02.317157 kubelet[2686]: I0527 17:38:02.317102 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94675efa-847e-427a-824d-b12d79137e81-whisker-backend-key-pair\") pod \"whisker-6dfc7bbd78-jxb8h\" (UID: \"94675efa-847e-427a-824d-b12d79137e81\") " pod="calico-system/whisker-6dfc7bbd78-jxb8h"
May 27 17:38:02.317157 kubelet[2686]: I0527 17:38:02.317160 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/093120ac-0553-49ef-a766-a2f9173d2bc1-config-volume\") pod \"coredns-674b8bbfcf-w2htf\" (UID: \"093120ac-0553-49ef-a766-a2f9173d2bc1\") " pod="kube-system/coredns-674b8bbfcf-w2htf"
May 27 17:38:02.317382 kubelet[2686]: I0527 17:38:02.317186 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d27d1841-eda2-4baf-a093-ef4da86ab409-config\") pod \"goldmane-78d55f7ddc-zdrnc\" (UID: \"d27d1841-eda2-4baf-a093-ef4da86ab409\") " pod="calico-system/goldmane-78d55f7ddc-zdrnc"
May 27 17:38:02.317382 kubelet[2686]: I0527 17:38:02.317209 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b06ffdf2-146c-4120-a1e6-0c4f3555105e-config-volume\") pod \"coredns-674b8bbfcf-g5ppm\" (UID: \"b06ffdf2-146c-4120-a1e6-0c4f3555105e\") " pod="kube-system/coredns-674b8bbfcf-g5ppm"
May 27 17:38:02.317382 kubelet[2686]: I0527 17:38:02.317259 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dbz\" (UniqueName: \"kubernetes.io/projected/76b8a2b2-e971-4c11-be31-8a11df3e321a-kube-api-access-62dbz\") pod \"calico-apiserver-54cb66b499-csx8g\" (UID: \"76b8a2b2-e971-4c11-be31-8a11df3e321a\") " pod="calico-apiserver/calico-apiserver-54cb66b499-csx8g"
May 27 17:38:02.317382 kubelet[2686]: I0527 17:38:02.317306 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94675efa-847e-427a-824d-b12d79137e81-whisker-ca-bundle\") pod \"whisker-6dfc7bbd78-jxb8h\" (UID: \"94675efa-847e-427a-824d-b12d79137e81\") " pod="calico-system/whisker-6dfc7bbd78-jxb8h"
May 27 17:38:02.317523 kubelet[2686]: I0527 17:38:02.317434 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d27d1841-eda2-4baf-a093-ef4da86ab409-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-zdrnc\" (UID: \"d27d1841-eda2-4baf-a093-ef4da86ab409\") " pod="calico-system/goldmane-78d55f7ddc-zdrnc"
May 27 17:38:02.317569 kubelet[2686]: I0527 17:38:02.317530 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8w8\" (UniqueName: \"kubernetes.io/projected/d27d1841-eda2-4baf-a093-ef4da86ab409-kube-api-access-qd8w8\") pod \"goldmane-78d55f7ddc-zdrnc\" (UID: \"d27d1841-eda2-4baf-a093-ef4da86ab409\") " pod="calico-system/goldmane-78d55f7ddc-zdrnc"
May 27 17:38:02.317612 kubelet[2686]: I0527 17:38:02.317583 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7778j\" (UniqueName: \"kubernetes.io/projected/093120ac-0553-49ef-a766-a2f9173d2bc1-kube-api-access-7778j\") pod \"coredns-674b8bbfcf-w2htf\" (UID: \"093120ac-0553-49ef-a766-a2f9173d2bc1\") " pod="kube-system/coredns-674b8bbfcf-w2htf"
May 27 17:38:02.317651 kubelet[2686]: I0527 17:38:02.317613 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/76b8a2b2-e971-4c11-be31-8a11df3e321a-calico-apiserver-certs\") pod \"calico-apiserver-54cb66b499-csx8g\" (UID: \"76b8a2b2-e971-4c11-be31-8a11df3e321a\") " pod="calico-apiserver/calico-apiserver-54cb66b499-csx8g"
May 27 17:38:02.317651 kubelet[2686]: I0527 17:38:02.317637 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d27d1841-eda2-4baf-a093-ef4da86ab409-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-zdrnc\" (UID: \"d27d1841-eda2-4baf-a093-ef4da86ab409\") " pod="calico-system/goldmane-78d55f7ddc-zdrnc"
May 27 17:38:02.317722 kubelet[2686]: I0527 17:38:02.317660 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b005d8ad-4da9-4276-9016-e8cce20cb81b-tigera-ca-bundle\") pod \"calico-kube-controllers-6884bd4ddb-5fcc6\" (UID: \"b005d8ad-4da9-4276-9016-e8cce20cb81b\") " pod="calico-system/calico-kube-controllers-6884bd4ddb-5fcc6"
May 27 17:38:02.317722 kubelet[2686]: I0527 17:38:02.317682 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea297b7e-9a74-4e9e-a08f-ee2c202087d1-calico-apiserver-certs\") pod \"calico-apiserver-54cb66b499-cvk4h\" (UID: \"ea297b7e-9a74-4e9e-a08f-ee2c202087d1\") " pod="calico-apiserver/calico-apiserver-54cb66b499-cvk4h"
May 27 17:38:02.317722 kubelet[2686]: I0527 17:38:02.317707 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvlzp\" (UniqueName: \"kubernetes.io/projected/94675efa-847e-427a-824d-b12d79137e81-kube-api-access-rvlzp\") pod \"whisker-6dfc7bbd78-jxb8h\" (UID: \"94675efa-847e-427a-824d-b12d79137e81\") " pod="calico-system/whisker-6dfc7bbd78-jxb8h"
May 27 17:38:02.317722 kubelet[2686]: I0527 17:38:02.317721 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wh64\" (UniqueName: \"kubernetes.io/projected/b06ffdf2-146c-4120-a1e6-0c4f3555105e-kube-api-access-8wh64\") pod \"coredns-674b8bbfcf-g5ppm\" (UID: \"b06ffdf2-146c-4120-a1e6-0c4f3555105e\") " pod="kube-system/coredns-674b8bbfcf-g5ppm"
May 27 17:38:02.317852 kubelet[2686]: I0527 17:38:02.317735 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgrrq\" (UniqueName: \"kubernetes.io/projected/b005d8ad-4da9-4276-9016-e8cce20cb81b-kube-api-access-xgrrq\") pod \"calico-kube-controllers-6884bd4ddb-5fcc6\" (UID: \"b005d8ad-4da9-4276-9016-e8cce20cb81b\") " pod="calico-system/calico-kube-controllers-6884bd4ddb-5fcc6"
May 27 17:38:02.317852 kubelet[2686]: I0527 17:38:02.317756 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfq4z\" (UniqueName: \"kubernetes.io/projected/ea297b7e-9a74-4e9e-a08f-ee2c202087d1-kube-api-access-hfq4z\") pod \"calico-apiserver-54cb66b499-cvk4h\" (UID: \"ea297b7e-9a74-4e9e-a08f-ee2c202087d1\") " pod="calico-apiserver/calico-apiserver-54cb66b499-cvk4h"
May 27 17:38:02.497833 containerd[1593]: time="2025-05-27T17:38:02.497774826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6884bd4ddb-5fcc6,Uid:b005d8ad-4da9-4276-9016-e8cce20cb81b,Namespace:calico-system,Attempt:0,}"
May 27 17:38:02.505355 containerd[1593]: time="2025-05-27T17:38:02.505321933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w2htf,Uid:093120ac-0553-49ef-a766-a2f9173d2bc1,Namespace:kube-system,Attempt:0,}"
May 27 17:38:02.513561 containerd[1593]: time="2025-05-27T17:38:02.513475441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dfc7bbd78-jxb8h,Uid:94675efa-847e-427a-824d-b12d79137e81,Namespace:calico-system,Attempt:0,}"
May 27 17:38:02.523033 containerd[1593]: time="2025-05-27T17:38:02.522973187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cb66b499-cvk4h,Uid:ea297b7e-9a74-4e9e-a08f-ee2c202087d1,Namespace:calico-apiserver,Attempt:0,}"
May 27 17:38:02.526293 containerd[1593]: time="2025-05-27T17:38:02.526265211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-zdrnc,Uid:d27d1841-eda2-4baf-a093-ef4da86ab409,Namespace:calico-system,Attempt:0,}"
May 27 17:38:02.536576 containerd[1593]: time="2025-05-27T17:38:02.536550760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cb66b499-csx8g,Uid:76b8a2b2-e971-4c11-be31-8a11df3e321a,Namespace:calico-apiserver,Attempt:0,}"
May 27 17:38:02.539276 containerd[1593]: time="2025-05-27T17:38:02.538947198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g5ppm,Uid:b06ffdf2-146c-4120-a1e6-0c4f3555105e,Namespace:kube-system,Attempt:0,}"
May 27 17:38:02.661034 containerd[1593]: time="2025-05-27T17:38:02.660983629Z" level=error msg="Failed to destroy network for sandbox \"8a2c35c883be3209011742cf111b7262d76a79d78d64adeedcad4d685db10de1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.664196 containerd[1593]: time="2025-05-27T17:38:02.664157199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cb66b499-cvk4h,Uid:ea297b7e-9a74-4e9e-a08f-ee2c202087d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2c35c883be3209011742cf111b7262d76a79d78d64adeedcad4d685db10de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.665449 containerd[1593]: time="2025-05-27T17:38:02.665419423Z" level=error msg="Failed to destroy network for sandbox \"baf6a966fa30c7f93d853f313bcd040553c256a6bb574aea096ff79a42b55708\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.669987 containerd[1593]: time="2025-05-27T17:38:02.669958032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w2htf,Uid:093120ac-0553-49ef-a766-a2f9173d2bc1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"baf6a966fa30c7f93d853f313bcd040553c256a6bb574aea096ff79a42b55708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.685021 kubelet[2686]: E0527 17:38:02.684957 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"baf6a966fa30c7f93d853f313bcd040553c256a6bb574aea096ff79a42b55708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.685021 kubelet[2686]: E0527 17:38:02.684992 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2c35c883be3209011742cf111b7262d76a79d78d64adeedcad4d685db10de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.685256 kubelet[2686]: E0527 17:38:02.685044 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"baf6a966fa30c7f93d853f313bcd040553c256a6bb574aea096ff79a42b55708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w2htf"
May 27 17:38:02.685256 kubelet[2686]: E0527 17:38:02.685134 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"baf6a966fa30c7f93d853f313bcd040553c256a6bb574aea096ff79a42b55708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w2htf"
May 27 17:38:02.685256 kubelet[2686]: E0527 17:38:02.685195 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-w2htf_kube-system(093120ac-0553-49ef-a766-a2f9173d2bc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-w2htf_kube-system(093120ac-0553-49ef-a766-a2f9173d2bc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"baf6a966fa30c7f93d853f313bcd040553c256a6bb574aea096ff79a42b55708\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w2htf" podUID="093120ac-0553-49ef-a766-a2f9173d2bc1"
May 27 17:38:02.685687 kubelet[2686]: E0527 17:38:02.685649 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2c35c883be3209011742cf111b7262d76a79d78d64adeedcad4d685db10de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54cb66b499-cvk4h"
May 27 17:38:02.685757 kubelet[2686]: E0527 17:38:02.685687 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2c35c883be3209011742cf111b7262d76a79d78d64adeedcad4d685db10de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54cb66b499-cvk4h"
May 27 17:38:02.685757 kubelet[2686]: E0527 17:38:02.685738 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54cb66b499-cvk4h_calico-apiserver(ea297b7e-9a74-4e9e-a08f-ee2c202087d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54cb66b499-cvk4h_calico-apiserver(ea297b7e-9a74-4e9e-a08f-ee2c202087d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a2c35c883be3209011742cf111b7262d76a79d78d64adeedcad4d685db10de1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54cb66b499-cvk4h" podUID="ea297b7e-9a74-4e9e-a08f-ee2c202087d1"
May 27 17:38:02.686862 containerd[1593]: time="2025-05-27T17:38:02.686817397Z" level=error msg="Failed to destroy network for sandbox \"669b39e55248fbd489e6911cd1379ee53c5f8406d07c94834b76c5aaff792b44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.688642 containerd[1593]: time="2025-05-27T17:38:02.688593768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6884bd4ddb-5fcc6,Uid:b005d8ad-4da9-4276-9016-e8cce20cb81b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"669b39e55248fbd489e6911cd1379ee53c5f8406d07c94834b76c5aaff792b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.688986 kubelet[2686]: E0527 17:38:02.688945 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669b39e55248fbd489e6911cd1379ee53c5f8406d07c94834b76c5aaff792b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.689052 kubelet[2686]: E0527 17:38:02.688996 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669b39e55248fbd489e6911cd1379ee53c5f8406d07c94834b76c5aaff792b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6884bd4ddb-5fcc6"
May 27 17:38:02.689052 kubelet[2686]: E0527 17:38:02.689024 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669b39e55248fbd489e6911cd1379ee53c5f8406d07c94834b76c5aaff792b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6884bd4ddb-5fcc6"
May 27 17:38:02.689241 kubelet[2686]: E0527 17:38:02.689091 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6884bd4ddb-5fcc6_calico-system(b005d8ad-4da9-4276-9016-e8cce20cb81b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6884bd4ddb-5fcc6_calico-system(b005d8ad-4da9-4276-9016-e8cce20cb81b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"669b39e55248fbd489e6911cd1379ee53c5f8406d07c94834b76c5aaff792b44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6884bd4ddb-5fcc6" podUID="b005d8ad-4da9-4276-9016-e8cce20cb81b"
May 27 17:38:02.694156 containerd[1593]: time="2025-05-27T17:38:02.694111017Z" level=error msg="Failed to destroy network for sandbox \"cb5be8a81dad1df7765b8eccbed96e3345bb956f235fe66283cd7a44082ce9c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.696652 containerd[1593]: time="2025-05-27T17:38:02.696591203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dfc7bbd78-jxb8h,Uid:94675efa-847e-427a-824d-b12d79137e81,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb5be8a81dad1df7765b8eccbed96e3345bb956f235fe66283cd7a44082ce9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.696856 kubelet[2686]: E0527 17:38:02.696807 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb5be8a81dad1df7765b8eccbed96e3345bb956f235fe66283cd7a44082ce9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.697012 kubelet[2686]: E0527 17:38:02.696865 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb5be8a81dad1df7765b8eccbed96e3345bb956f235fe66283cd7a44082ce9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dfc7bbd78-jxb8h"
May 27 17:38:02.697012 kubelet[2686]: E0527 17:38:02.696889 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb5be8a81dad1df7765b8eccbed96e3345bb956f235fe66283cd7a44082ce9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dfc7bbd78-jxb8h"
May 27 17:38:02.697012 kubelet[2686]: E0527 17:38:02.696931 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dfc7bbd78-jxb8h_calico-system(94675efa-847e-427a-824d-b12d79137e81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dfc7bbd78-jxb8h_calico-system(94675efa-847e-427a-824d-b12d79137e81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb5be8a81dad1df7765b8eccbed96e3345bb956f235fe66283cd7a44082ce9c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dfc7bbd78-jxb8h" podUID="94675efa-847e-427a-824d-b12d79137e81"
May 27 17:38:02.697303 containerd[1593]: time="2025-05-27T17:38:02.696699657Z" level=error msg="Failed to destroy network for sandbox \"bca8135a8272e3fbb5a90ba72642ff36045bbca2e447a628b9f0708c44f4e2ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.700159 containerd[1593]: time="2025-05-27T17:38:02.699896360Z" level=error msg="Failed to destroy network for sandbox \"4f6caeddc9c3dd24ee940af8203b63b5540ff6b704510e58fff92b608ae8784d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.700159 containerd[1593]: time="2025-05-27T17:38:02.700132925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cb66b499-csx8g,Uid:76b8a2b2-e971-4c11-be31-8a11df3e321a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca8135a8272e3fbb5a90ba72642ff36045bbca2e447a628b9f0708c44f4e2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.700612 kubelet[2686]: E0527 17:38:02.700570 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca8135a8272e3fbb5a90ba72642ff36045bbca2e447a628b9f0708c44f4e2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.700689 kubelet[2686]: E0527 17:38:02.700625 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca8135a8272e3fbb5a90ba72642ff36045bbca2e447a628b9f0708c44f4e2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54cb66b499-csx8g"
May 27 17:38:02.700689 kubelet[2686]: E0527 17:38:02.700651 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca8135a8272e3fbb5a90ba72642ff36045bbca2e447a628b9f0708c44f4e2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54cb66b499-csx8g"
May 27 17:38:02.700749 kubelet[2686]: E0527 17:38:02.700709 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54cb66b499-csx8g_calico-apiserver(76b8a2b2-e971-4c11-be31-8a11df3e321a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54cb66b499-csx8g_calico-apiserver(76b8a2b2-e971-4c11-be31-8a11df3e321a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bca8135a8272e3fbb5a90ba72642ff36045bbca2e447a628b9f0708c44f4e2ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54cb66b499-csx8g" podUID="76b8a2b2-e971-4c11-be31-8a11df3e321a"
May 27 17:38:02.701315 containerd[1593]: time="2025-05-27T17:38:02.701211234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-zdrnc,Uid:d27d1841-eda2-4baf-a093-ef4da86ab409,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6caeddc9c3dd24ee940af8203b63b5540ff6b704510e58fff92b608ae8784d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.701787 kubelet[2686]: E0527 17:38:02.701754 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6caeddc9c3dd24ee940af8203b63b5540ff6b704510e58fff92b608ae8784d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.701825 kubelet[2686]: E0527 17:38:02.701803 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6caeddc9c3dd24ee940af8203b63b5540ff6b704510e58fff92b608ae8784d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-zdrnc"
May 27 17:38:02.701871 kubelet[2686]: E0527 17:38:02.701825 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6caeddc9c3dd24ee940af8203b63b5540ff6b704510e58fff92b608ae8784d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-zdrnc"
May 27 17:38:02.702185 kubelet[2686]: E0527 17:38:02.702155 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-zdrnc_calico-system(d27d1841-eda2-4baf-a093-ef4da86ab409)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-zdrnc_calico-system(d27d1841-eda2-4baf-a093-ef4da86ab409)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f6caeddc9c3dd24ee940af8203b63b5540ff6b704510e58fff92b608ae8784d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-zdrnc" podUID="d27d1841-eda2-4baf-a093-ef4da86ab409"
May 27 17:38:02.712201 containerd[1593]: time="2025-05-27T17:38:02.712152496Z" level=error msg="Failed to destroy network for sandbox \"bcbea17b8d3d700b883a4d7ad621db821bd17877dba65b453d309bce4a1053fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.713327 containerd[1593]: time="2025-05-27T17:38:02.713278424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g5ppm,Uid:b06ffdf2-146c-4120-a1e6-0c4f3555105e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbea17b8d3d700b883a4d7ad621db821bd17877dba65b453d309bce4a1053fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.713482 kubelet[2686]: E0527 17:38:02.713435 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbea17b8d3d700b883a4d7ad621db821bd17877dba65b453d309bce4a1053fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:38:02.713482 kubelet[2686]: E0527
17:38:02.713488 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbea17b8d3d700b883a4d7ad621db821bd17877dba65b453d309bce4a1053fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-g5ppm" May 27 17:38:02.713677 kubelet[2686]: E0527 17:38:02.713510 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbea17b8d3d700b883a4d7ad621db821bd17877dba65b453d309bce4a1053fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-g5ppm" May 27 17:38:02.713677 kubelet[2686]: E0527 17:38:02.713571 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-g5ppm_kube-system(b06ffdf2-146c-4120-a1e6-0c4f3555105e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-g5ppm_kube-system(b06ffdf2-146c-4120-a1e6-0c4f3555105e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcbea17b8d3d700b883a4d7ad621db821bd17877dba65b453d309bce4a1053fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-g5ppm" podUID="b06ffdf2-146c-4120-a1e6-0c4f3555105e" May 27 17:38:03.002779 containerd[1593]: time="2025-05-27T17:38:03.002674537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:38:03.931618 systemd[1]: Created slice kubepods-besteffort-podfcd8288a_3a1a_4f74_9ffc_0a9589729432.slice - 
libcontainer container kubepods-besteffort-podfcd8288a_3a1a_4f74_9ffc_0a9589729432.slice. May 27 17:38:03.933705 containerd[1593]: time="2025-05-27T17:38:03.933670712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-228hb,Uid:fcd8288a-3a1a-4f74-9ffc-0a9589729432,Namespace:calico-system,Attempt:0,}" May 27 17:38:03.981243 containerd[1593]: time="2025-05-27T17:38:03.981048179Z" level=error msg="Failed to destroy network for sandbox \"a0b19e9e5b589ada3911662ec7e9e6c055db05e6974c6a787ba4c16d8a129ee1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:38:03.983735 systemd[1]: run-netns-cni\x2dcfb6dec6\x2d93a7\x2d3c34\x2dc9e9\x2d344f13973bf7.mount: Deactivated successfully. May 27 17:38:03.985161 containerd[1593]: time="2025-05-27T17:38:03.985120158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-228hb,Uid:fcd8288a-3a1a-4f74-9ffc-0a9589729432,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0b19e9e5b589ada3911662ec7e9e6c055db05e6974c6a787ba4c16d8a129ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:38:03.985313 kubelet[2686]: E0527 17:38:03.985278 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0b19e9e5b589ada3911662ec7e9e6c055db05e6974c6a787ba4c16d8a129ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:38:03.985623 kubelet[2686]: E0527 17:38:03.985334 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0b19e9e5b589ada3911662ec7e9e6c055db05e6974c6a787ba4c16d8a129ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-228hb" May 27 17:38:03.985623 kubelet[2686]: E0527 17:38:03.985353 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0b19e9e5b589ada3911662ec7e9e6c055db05e6974c6a787ba4c16d8a129ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-228hb" May 27 17:38:03.985623 kubelet[2686]: E0527 17:38:03.985405 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-228hb_calico-system(fcd8288a-3a1a-4f74-9ffc-0a9589729432)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-228hb_calico-system(fcd8288a-3a1a-4f74-9ffc-0a9589729432)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0b19e9e5b589ada3911662ec7e9e6c055db05e6974c6a787ba4c16d8a129ee1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-228hb" podUID="fcd8288a-3a1a-4f74-9ffc-0a9589729432" May 27 17:38:04.886164 kubelet[2686]: I0527 17:38:04.886048 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:38:09.320208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076457706.mount: Deactivated successfully. 
May 27 17:38:10.098581 containerd[1593]: time="2025-05-27T17:38:10.098506089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:10.101324 containerd[1593]: time="2025-05-27T17:38:10.101280863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 17:38:10.104264 containerd[1593]: time="2025-05-27T17:38:10.104211659Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:10.107013 containerd[1593]: time="2025-05-27T17:38:10.106965173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:10.107747 containerd[1593]: time="2025-05-27T17:38:10.107705523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 7.104979128s" May 27 17:38:10.107795 containerd[1593]: time="2025-05-27T17:38:10.107750878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 17:38:10.137992 containerd[1593]: time="2025-05-27T17:38:10.137931613Z" level=info msg="CreateContainer within sandbox \"1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 17:38:10.152489 containerd[1593]: time="2025-05-27T17:38:10.152435654Z" level=info msg="Container 
10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:10.175086 containerd[1593]: time="2025-05-27T17:38:10.175005498Z" level=info msg="CreateContainer within sandbox \"1dd10a0279f1d712934ea29fc7a021348252817f3b9d7ca8eb410a7a74486149\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597\"" May 27 17:38:10.175664 containerd[1593]: time="2025-05-27T17:38:10.175627868Z" level=info msg="StartContainer for \"10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597\"" May 27 17:38:10.177351 containerd[1593]: time="2025-05-27T17:38:10.177317542Z" level=info msg="connecting to shim 10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597" address="unix:///run/containerd/s/5200eab89e8f1396b1a77d3292e2dd244bcdb73a8c82afb2843318e08bd62d38" protocol=ttrpc version=3 May 27 17:38:10.215289 systemd[1]: Started cri-containerd-10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597.scope - libcontainer container 10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597. May 27 17:38:10.281592 containerd[1593]: time="2025-05-27T17:38:10.281540248Z" level=info msg="StartContainer for \"10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597\" returns successfully" May 27 17:38:10.395339 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 17:38:10.396247 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 17:38:10.563455 kubelet[2686]: I0527 17:38:10.563389 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94675efa-847e-427a-824d-b12d79137e81-whisker-backend-key-pair\") pod \"94675efa-847e-427a-824d-b12d79137e81\" (UID: \"94675efa-847e-427a-824d-b12d79137e81\") " May 27 17:38:10.563455 kubelet[2686]: I0527 17:38:10.563459 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvlzp\" (UniqueName: \"kubernetes.io/projected/94675efa-847e-427a-824d-b12d79137e81-kube-api-access-rvlzp\") pod \"94675efa-847e-427a-824d-b12d79137e81\" (UID: \"94675efa-847e-427a-824d-b12d79137e81\") " May 27 17:38:10.563980 kubelet[2686]: I0527 17:38:10.563482 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94675efa-847e-427a-824d-b12d79137e81-whisker-ca-bundle\") pod \"94675efa-847e-427a-824d-b12d79137e81\" (UID: \"94675efa-847e-427a-824d-b12d79137e81\") " May 27 17:38:10.566925 kubelet[2686]: I0527 17:38:10.566691 2686 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94675efa-847e-427a-824d-b12d79137e81-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "94675efa-847e-427a-824d-b12d79137e81" (UID: "94675efa-847e-427a-824d-b12d79137e81"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:38:10.571550 systemd[1]: var-lib-kubelet-pods-94675efa\x2d847e\x2d427a\x2d824d\x2db12d79137e81-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 27 17:38:10.572863 kubelet[2686]: I0527 17:38:10.571992 2686 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94675efa-847e-427a-824d-b12d79137e81-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "94675efa-847e-427a-824d-b12d79137e81" (UID: "94675efa-847e-427a-824d-b12d79137e81"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:38:10.572863 kubelet[2686]: I0527 17:38:10.572831 2686 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94675efa-847e-427a-824d-b12d79137e81-kube-api-access-rvlzp" (OuterVolumeSpecName: "kube-api-access-rvlzp") pod "94675efa-847e-427a-824d-b12d79137e81" (UID: "94675efa-847e-427a-824d-b12d79137e81"). InnerVolumeSpecName "kube-api-access-rvlzp". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:38:10.575837 systemd[1]: var-lib-kubelet-pods-94675efa\x2d847e\x2d427a\x2d824d\x2db12d79137e81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drvlzp.mount: Deactivated successfully. 
May 27 17:38:10.664056 kubelet[2686]: I0527 17:38:10.663983 2686 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rvlzp\" (UniqueName: \"kubernetes.io/projected/94675efa-847e-427a-824d-b12d79137e81-kube-api-access-rvlzp\") on node \"localhost\" DevicePath \"\"" May 27 17:38:10.664056 kubelet[2686]: I0527 17:38:10.664019 2686 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94675efa-847e-427a-824d-b12d79137e81-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 17:38:10.664056 kubelet[2686]: I0527 17:38:10.664028 2686 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94675efa-847e-427a-824d-b12d79137e81-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 17:38:11.024928 systemd[1]: Removed slice kubepods-besteffort-pod94675efa_847e_427a_824d_b12d79137e81.slice - libcontainer container kubepods-besteffort-pod94675efa_847e_427a_824d_b12d79137e81.slice. May 27 17:38:11.046014 kubelet[2686]: I0527 17:38:11.045915 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dmk8f" podStartSLOduration=1.909547849 podStartE2EDuration="19.045898605s" podCreationTimestamp="2025-05-27 17:37:52 +0000 UTC" firstStartedPulling="2025-05-27 17:37:52.972180138 +0000 UTC m=+19.163739613" lastFinishedPulling="2025-05-27 17:38:10.108530884 +0000 UTC m=+36.300090369" observedRunningTime="2025-05-27 17:38:11.045708568 +0000 UTC m=+37.237268043" watchObservedRunningTime="2025-05-27 17:38:11.045898605 +0000 UTC m=+37.237458080" May 27 17:38:11.108033 systemd[1]: Created slice kubepods-besteffort-pod8474f19b_dee7_4c30_b00b_6e2228a0c605.slice - libcontainer container kubepods-besteffort-pod8474f19b_dee7_4c30_b00b_6e2228a0c605.slice. 
May 27 17:38:11.166971 kubelet[2686]: I0527 17:38:11.166896 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnsmc\" (UniqueName: \"kubernetes.io/projected/8474f19b-dee7-4c30-b00b-6e2228a0c605-kube-api-access-wnsmc\") pod \"whisker-854ff896c6-s24tx\" (UID: \"8474f19b-dee7-4c30-b00b-6e2228a0c605\") " pod="calico-system/whisker-854ff896c6-s24tx" May 27 17:38:11.166971 kubelet[2686]: I0527 17:38:11.166958 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8474f19b-dee7-4c30-b00b-6e2228a0c605-whisker-backend-key-pair\") pod \"whisker-854ff896c6-s24tx\" (UID: \"8474f19b-dee7-4c30-b00b-6e2228a0c605\") " pod="calico-system/whisker-854ff896c6-s24tx" May 27 17:38:11.166971 kubelet[2686]: I0527 17:38:11.166979 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8474f19b-dee7-4c30-b00b-6e2228a0c605-whisker-ca-bundle\") pod \"whisker-854ff896c6-s24tx\" (UID: \"8474f19b-dee7-4c30-b00b-6e2228a0c605\") " pod="calico-system/whisker-854ff896c6-s24tx" May 27 17:38:11.411981 containerd[1593]: time="2025-05-27T17:38:11.411839195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-854ff896c6-s24tx,Uid:8474f19b-dee7-4c30-b00b-6e2228a0c605,Namespace:calico-system,Attempt:0,}" May 27 17:38:11.565634 systemd-networkd[1491]: cali1b02ed0220e: Link UP May 27 17:38:11.565908 systemd-networkd[1491]: cali1b02ed0220e: Gained carrier May 27 17:38:11.579635 containerd[1593]: 2025-05-27 17:38:11.439 [INFO][3822] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:38:11.579635 containerd[1593]: 2025-05-27 17:38:11.458 [INFO][3822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--854ff896c6--s24tx-eth0 whisker-854ff896c6- calico-system 8474f19b-dee7-4c30-b00b-6e2228a0c605 887 0 2025-05-27 17:38:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:854ff896c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-854ff896c6-s24tx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1b02ed0220e [] [] }} ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Namespace="calico-system" Pod="whisker-854ff896c6-s24tx" WorkloadEndpoint="localhost-k8s-whisker--854ff896c6--s24tx-" May 27 17:38:11.579635 containerd[1593]: 2025-05-27 17:38:11.458 [INFO][3822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Namespace="calico-system" Pod="whisker-854ff896c6-s24tx" WorkloadEndpoint="localhost-k8s-whisker--854ff896c6--s24tx-eth0" May 27 17:38:11.579635 containerd[1593]: 2025-05-27 17:38:11.523 [INFO][3837] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" HandleID="k8s-pod-network.f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Workload="localhost-k8s-whisker--854ff896c6--s24tx-eth0" May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.523 [INFO][3837] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" HandleID="k8s-pod-network.f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Workload="localhost-k8s-whisker--854ff896c6--s24tx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000351340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-854ff896c6-s24tx", "timestamp":"2025-05-27 17:38:11.523182805 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.523 [INFO][3837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.524 [INFO][3837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.524 [INFO][3837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.533 [INFO][3837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" host="localhost" May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.539 [INFO][3837] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.543 [INFO][3837] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.544 [INFO][3837] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.546 [INFO][3837] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:38:11.579905 containerd[1593]: 2025-05-27 17:38:11.546 [INFO][3837] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" host="localhost" May 27 17:38:11.580256 containerd[1593]: 2025-05-27 17:38:11.547 [INFO][3837] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd May 27 17:38:11.580256 containerd[1593]: 2025-05-27 17:38:11.550 [INFO][3837] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" host="localhost" May 27 17:38:11.580256 containerd[1593]: 2025-05-27 17:38:11.555 [INFO][3837] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" host="localhost" May 27 17:38:11.580256 containerd[1593]: 2025-05-27 17:38:11.555 [INFO][3837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" host="localhost" May 27 17:38:11.580256 containerd[1593]: 2025-05-27 17:38:11.555 [INFO][3837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:38:11.580256 containerd[1593]: 2025-05-27 17:38:11.555 [INFO][3837] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" HandleID="k8s-pod-network.f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Workload="localhost-k8s-whisker--854ff896c6--s24tx-eth0" May 27 17:38:11.580377 containerd[1593]: 2025-05-27 17:38:11.558 [INFO][3822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Namespace="calico-system" Pod="whisker-854ff896c6-s24tx" WorkloadEndpoint="localhost-k8s-whisker--854ff896c6--s24tx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--854ff896c6--s24tx-eth0", GenerateName:"whisker-854ff896c6-", Namespace:"calico-system", SelfLink:"", UID:"8474f19b-dee7-4c30-b00b-6e2228a0c605", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"854ff896c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-854ff896c6-s24tx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b02ed0220e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:11.580377 containerd[1593]: 2025-05-27 17:38:11.558 [INFO][3822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Namespace="calico-system" Pod="whisker-854ff896c6-s24tx" WorkloadEndpoint="localhost-k8s-whisker--854ff896c6--s24tx-eth0" May 27 17:38:11.580448 containerd[1593]: 2025-05-27 17:38:11.558 [INFO][3822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b02ed0220e ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Namespace="calico-system" Pod="whisker-854ff896c6-s24tx" WorkloadEndpoint="localhost-k8s-whisker--854ff896c6--s24tx-eth0" May 27 17:38:11.580448 containerd[1593]: 2025-05-27 17:38:11.566 [INFO][3822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Namespace="calico-system" Pod="whisker-854ff896c6-s24tx" WorkloadEndpoint="localhost-k8s-whisker--854ff896c6--s24tx-eth0" May 27 17:38:11.580496 containerd[1593]: 2025-05-27 17:38:11.566 [INFO][3822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Namespace="calico-system" Pod="whisker-854ff896c6-s24tx" WorkloadEndpoint="localhost-k8s-whisker--854ff896c6--s24tx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--854ff896c6--s24tx-eth0", GenerateName:"whisker-854ff896c6-", Namespace:"calico-system", SelfLink:"", UID:"8474f19b-dee7-4c30-b00b-6e2228a0c605", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 11, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"854ff896c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd", Pod:"whisker-854ff896c6-s24tx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b02ed0220e", MAC:"22:b3:ff:fe:dc:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:11.580549 containerd[1593]: 2025-05-27 17:38:11.574 [INFO][3822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" Namespace="calico-system" Pod="whisker-854ff896c6-s24tx" WorkloadEndpoint="localhost-k8s-whisker--854ff896c6--s24tx-eth0" May 27 17:38:11.627743 containerd[1593]: time="2025-05-27T17:38:11.627683782Z" level=info msg="connecting to shim f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd" address="unix:///run/containerd/s/5356889ecbba779c999b7c1ffdc7321c83268bc3110c2faef2cfd9b2c2e56d40" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:11.668236 systemd[1]: Started cri-containerd-f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd.scope - libcontainer container f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd. 
May 27 17:38:11.681680 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:38:11.712316 containerd[1593]: time="2025-05-27T17:38:11.712254728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-854ff896c6-s24tx,Uid:8474f19b-dee7-4c30-b00b-6e2228a0c605,Namespace:calico-system,Attempt:0,} returns sandbox id \"f63da30ad65dbf658942b61ddcd35b9129c3028bef6d877df15b750e2e7351dd\"" May 27 17:38:11.714001 containerd[1593]: time="2025-05-27T17:38:11.713951165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:38:11.916306 kubelet[2686]: I0527 17:38:11.916016 2686 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94675efa-847e-427a-824d-b12d79137e81" path="/var/lib/kubelet/pods/94675efa-847e-427a-824d-b12d79137e81/volumes" May 27 17:38:12.018575 containerd[1593]: time="2025-05-27T17:38:12.018510319Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:38:12.205095 containerd[1593]: time="2025-05-27T17:38:12.204989041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:38:12.210997 containerd[1593]: time="2025-05-27T17:38:12.210932636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:38:12.211242 kubelet[2686]: E0527 17:38:12.211204 2686 
log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:38:12.211329 kubelet[2686]: E0527 17:38:12.211253 2686 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:38:12.213865 kubelet[2686]: E0527 17:38:12.213811 2686 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a3540ee0cc6d4d5ba3e9e8680912f34a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wnsmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-854ff896c6-s24tx_calico-system(8474f19b-dee7-4c30-b00b-6e2228a0c605): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:38:12.215879 containerd[1593]: 
time="2025-05-27T17:38:12.215844713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:38:12.302920 systemd-networkd[1491]: vxlan.calico: Link UP May 27 17:38:12.302931 systemd-networkd[1491]: vxlan.calico: Gained carrier May 27 17:38:12.449291 containerd[1593]: time="2025-05-27T17:38:12.449220397Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:38:12.450577 containerd[1593]: time="2025-05-27T17:38:12.450541208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:38:12.450669 containerd[1593]: time="2025-05-27T17:38:12.450584991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:38:12.450881 kubelet[2686]: E0527 17:38:12.450815 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:38:12.450881 kubelet[2686]: E0527 
17:38:12.450872 2686 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:38:12.451261 kubelet[2686]: E0527 17:38:12.451007 2686 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnsmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*100
01,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-854ff896c6-s24tx_calico-system(8474f19b-dee7-4c30-b00b-6e2228a0c605): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:38:12.452270 kubelet[2686]: E0527 17:38:12.452217 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-854ff896c6-s24tx" 
podUID="8474f19b-dee7-4c30-b00b-6e2228a0c605" May 27 17:38:13.006301 systemd-networkd[1491]: cali1b02ed0220e: Gained IPv6LL May 27 17:38:13.025166 kubelet[2686]: E0527 17:38:13.025049 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-854ff896c6-s24tx" podUID="8474f19b-dee7-4c30-b00b-6e2228a0c605" May 27 17:38:13.518266 systemd-networkd[1491]: vxlan.calico: Gained IPv6LL May 27 17:38:13.913139 containerd[1593]: time="2025-05-27T17:38:13.912982493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w2htf,Uid:093120ac-0553-49ef-a766-a2f9173d2bc1,Namespace:kube-system,Attempt:0,}" May 27 17:38:13.913763 containerd[1593]: time="2025-05-27T17:38:13.913706924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6884bd4ddb-5fcc6,Uid:b005d8ad-4da9-4276-9016-e8cce20cb81b,Namespace:calico-system,Attempt:0,}" May 27 17:38:13.913812 
containerd[1593]: time="2025-05-27T17:38:13.913774140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cb66b499-csx8g,Uid:76b8a2b2-e971-4c11-be31-8a11df3e321a,Namespace:calico-apiserver,Attempt:0,}" May 27 17:38:14.082870 systemd-networkd[1491]: calidf5abf3f655: Link UP May 27 17:38:14.083099 systemd-networkd[1491]: calidf5abf3f655: Gained carrier May 27 17:38:14.176696 containerd[1593]: 2025-05-27 17:38:13.969 [INFO][4097] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--w2htf-eth0 coredns-674b8bbfcf- kube-system 093120ac-0553-49ef-a766-a2f9173d2bc1 804 0 2025-05-27 17:37:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-w2htf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidf5abf3f655 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2htf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2htf-" May 27 17:38:14.176696 containerd[1593]: 2025-05-27 17:38:13.970 [INFO][4097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2htf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" May 27 17:38:14.176696 containerd[1593]: 2025-05-27 17:38:14.011 [INFO][4141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" HandleID="k8s-pod-network.5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Workload="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" May 27 
17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.012 [INFO][4141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" HandleID="k8s-pod-network.5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Workload="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-w2htf", "timestamp":"2025-05-27 17:38:14.011961615 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.012 [INFO][4141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.012 [INFO][4141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.012 [INFO][4141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.020 [INFO][4141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" host="localhost" May 27 17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.026 [INFO][4141] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.031 [INFO][4141] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.033 [INFO][4141] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.035 [INFO][4141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:38:14.176895 containerd[1593]: 2025-05-27 17:38:14.035 [INFO][4141] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" host="localhost" May 27 17:38:14.177156 containerd[1593]: 2025-05-27 17:38:14.036 [INFO][4141] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba May 27 17:38:14.177156 containerd[1593]: 2025-05-27 17:38:14.065 [INFO][4141] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" host="localhost" May 27 17:38:14.177156 containerd[1593]: 2025-05-27 17:38:14.075 [INFO][4141] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" host="localhost" May 27 17:38:14.177156 containerd[1593]: 2025-05-27 17:38:14.075 [INFO][4141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" host="localhost" May 27 17:38:14.177156 containerd[1593]: 2025-05-27 17:38:14.075 [INFO][4141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:38:14.177156 containerd[1593]: 2025-05-27 17:38:14.075 [INFO][4141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" HandleID="k8s-pod-network.5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Workload="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" May 27 17:38:14.177810 containerd[1593]: 2025-05-27 17:38:14.078 [INFO][4097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2htf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w2htf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"093120ac-0553-49ef-a766-a2f9173d2bc1", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-w2htf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf5abf3f655", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:14.177944 containerd[1593]: 2025-05-27 17:38:14.078 [INFO][4097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2htf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" May 27 17:38:14.177944 containerd[1593]: 2025-05-27 17:38:14.078 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf5abf3f655 ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2htf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" May 27 17:38:14.177944 containerd[1593]: 2025-05-27 17:38:14.082 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2htf" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" May 27 17:38:14.178038 containerd[1593]: 2025-05-27 17:38:14.083 [INFO][4097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2htf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w2htf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"093120ac-0553-49ef-a766-a2f9173d2bc1", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba", Pod:"coredns-674b8bbfcf-w2htf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf5abf3f655", MAC:"fe:29:d2:42:11:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:14.178038 containerd[1593]: 2025-05-27 17:38:14.170 [INFO][4097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2htf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2htf-eth0" May 27 17:38:14.250450 systemd-networkd[1491]: cali4b1135e7b1d: Link UP May 27 17:38:14.251482 systemd-networkd[1491]: cali4b1135e7b1d: Gained carrier May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:13.974 [INFO][4109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0 calico-apiserver-54cb66b499- calico-apiserver 76b8a2b2-e971-4c11-be31-8a11df3e321a 805 0 2025-05-27 17:37:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54cb66b499 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54cb66b499-csx8g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b1135e7b1d [] [] }} ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-csx8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--csx8g-" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:13.974 [INFO][4109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" 
Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-csx8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.012 [INFO][4147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" HandleID="k8s-pod-network.8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Workload="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.013 [INFO][4147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" HandleID="k8s-pod-network.8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Workload="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54cb66b499-csx8g", "timestamp":"2025-05-27 17:38:14.0125928 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.013 [INFO][4147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.075 [INFO][4147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.075 [INFO][4147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.169 [INFO][4147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" host="localhost" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.180 [INFO][4147] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.189 [INFO][4147] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.191 [INFO][4147] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.193 [INFO][4147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.193 [INFO][4147] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" host="localhost" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.195 [INFO][4147] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.236 [INFO][4147] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" host="localhost" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.244 [INFO][4147] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" host="localhost" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.244 [INFO][4147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" host="localhost" May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.244 [INFO][4147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:38:14.347925 containerd[1593]: 2025-05-27 17:38:14.244 [INFO][4147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" HandleID="k8s-pod-network.8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Workload="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" May 27 17:38:14.348809 containerd[1593]: 2025-05-27 17:38:14.247 [INFO][4109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-csx8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0", GenerateName:"calico-apiserver-54cb66b499-", Namespace:"calico-apiserver", SelfLink:"", UID:"76b8a2b2-e971-4c11-be31-8a11df3e321a", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cb66b499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54cb66b499-csx8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b1135e7b1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:14.348809 containerd[1593]: 2025-05-27 17:38:14.247 [INFO][4109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-csx8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" May 27 17:38:14.348809 containerd[1593]: 2025-05-27 17:38:14.247 [INFO][4109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b1135e7b1d ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-csx8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" May 27 17:38:14.348809 containerd[1593]: 2025-05-27 17:38:14.252 [INFO][4109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-csx8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" May 27 17:38:14.348809 containerd[1593]: 2025-05-27 17:38:14.254 [INFO][4109] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-csx8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0", GenerateName:"calico-apiserver-54cb66b499-", Namespace:"calico-apiserver", SelfLink:"", UID:"76b8a2b2-e971-4c11-be31-8a11df3e321a", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cb66b499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe", Pod:"calico-apiserver-54cb66b499-csx8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b1135e7b1d", MAC:"ba:64:73:a3:ce:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:14.348809 containerd[1593]: 2025-05-27 17:38:14.344 [INFO][4109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-csx8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--csx8g-eth0" May 27 17:38:14.415446 systemd-networkd[1491]: calif4221182bf8: Link UP May 27 17:38:14.416244 systemd-networkd[1491]: calif4221182bf8: Gained carrier May 27 17:38:14.440206 containerd[1593]: time="2025-05-27T17:38:14.439148267Z" level=info msg="connecting to shim 5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba" address="unix:///run/containerd/s/01fd04b83692723a88e11a148aefcefe02d1b2192e1bad5060d90fad9c817ef9" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:13.982 [INFO][4121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0 calico-kube-controllers-6884bd4ddb- calico-system b005d8ad-4da9-4276-9016-e8cce20cb81b 802 0 2025-05-27 17:37:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6884bd4ddb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6884bd4ddb-5fcc6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif4221182bf8 [] [] }} ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Namespace="calico-system" Pod="calico-kube-controllers-6884bd4ddb-5fcc6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:13.982 [INFO][4121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Namespace="calico-system" 
Pod="calico-kube-controllers-6884bd4ddb-5fcc6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.020 [INFO][4154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" HandleID="k8s-pod-network.9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Workload="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.021 [INFO][4154] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" HandleID="k8s-pod-network.9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Workload="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6884bd4ddb-5fcc6", "timestamp":"2025-05-27 17:38:14.020870346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.021 [INFO][4154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.244 [INFO][4154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.244 [INFO][4154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.252 [INFO][4154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" host="localhost" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.345 [INFO][4154] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.355 [INFO][4154] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.357 [INFO][4154] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.359 [INFO][4154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.359 [INFO][4154] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" host="localhost" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.361 [INFO][4154] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.381 [INFO][4154] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" host="localhost" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.407 [INFO][4154] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" host="localhost" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.407 [INFO][4154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" host="localhost" May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.407 [INFO][4154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:38:14.443351 containerd[1593]: 2025-05-27 17:38:14.407 [INFO][4154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" HandleID="k8s-pod-network.9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Workload="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" May 27 17:38:14.443821 containerd[1593]: 2025-05-27 17:38:14.411 [INFO][4121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Namespace="calico-system" Pod="calico-kube-controllers-6884bd4ddb-5fcc6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0", GenerateName:"calico-kube-controllers-6884bd4ddb-", Namespace:"calico-system", SelfLink:"", UID:"b005d8ad-4da9-4276-9016-e8cce20cb81b", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6884bd4ddb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6884bd4ddb-5fcc6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4221182bf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:14.443821 containerd[1593]: 2025-05-27 17:38:14.411 [INFO][4121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Namespace="calico-system" Pod="calico-kube-controllers-6884bd4ddb-5fcc6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" May 27 17:38:14.443821 containerd[1593]: 2025-05-27 17:38:14.412 [INFO][4121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4221182bf8 ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Namespace="calico-system" Pod="calico-kube-controllers-6884bd4ddb-5fcc6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" May 27 17:38:14.443821 containerd[1593]: 2025-05-27 17:38:14.416 [INFO][4121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Namespace="calico-system" Pod="calico-kube-controllers-6884bd4ddb-5fcc6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" May 27 17:38:14.443821 containerd[1593]: 
2025-05-27 17:38:14.417 [INFO][4121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Namespace="calico-system" Pod="calico-kube-controllers-6884bd4ddb-5fcc6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0", GenerateName:"calico-kube-controllers-6884bd4ddb-", Namespace:"calico-system", SelfLink:"", UID:"b005d8ad-4da9-4276-9016-e8cce20cb81b", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6884bd4ddb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c", Pod:"calico-kube-controllers-6884bd4ddb-5fcc6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4221182bf8", MAC:"2a:cd:47:cc:2d:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:14.443821 containerd[1593]: 
2025-05-27 17:38:14.435 [INFO][4121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" Namespace="calico-system" Pod="calico-kube-controllers-6884bd4ddb-5fcc6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884bd4ddb--5fcc6-eth0" May 27 17:38:14.474741 containerd[1593]: time="2025-05-27T17:38:14.474666622Z" level=info msg="connecting to shim 8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe" address="unix:///run/containerd/s/9f68a7dd1e0c9bdb62624a1fc944275c88d5b1f1620b5972f793e447f6c7b9c4" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:14.478362 systemd[1]: Started cri-containerd-5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba.scope - libcontainer container 5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba. May 27 17:38:14.479237 containerd[1593]: time="2025-05-27T17:38:14.479171342Z" level=info msg="connecting to shim 9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c" address="unix:///run/containerd/s/ed37aee3ed736e2fae87f00f907afa8d3e4dff94bd4bbcf304bb00ccc653f841" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:14.499260 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:38:14.518311 systemd[1]: Started cri-containerd-8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe.scope - libcontainer container 8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe. May 27 17:38:14.524658 systemd[1]: Started cri-containerd-9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c.scope - libcontainer container 9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c. 
May 27 17:38:14.540681 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:38:14.542622 containerd[1593]: time="2025-05-27T17:38:14.542573224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w2htf,Uid:093120ac-0553-49ef-a766-a2f9173d2bc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba\"" May 27 17:38:14.546614 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:38:14.553237 containerd[1593]: time="2025-05-27T17:38:14.553190763Z" level=info msg="CreateContainer within sandbox \"5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:38:14.576429 containerd[1593]: time="2025-05-27T17:38:14.576265859Z" level=info msg="Container eb0db24ff53860d19d553ddc5f22e5b47b981c58fc9f5c9c89be7ddea611877e: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:14.588016 containerd[1593]: time="2025-05-27T17:38:14.587940093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cb66b499-csx8g,Uid:76b8a2b2-e971-4c11-be31-8a11df3e321a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe\"" May 27 17:38:14.590583 containerd[1593]: time="2025-05-27T17:38:14.590529166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:38:14.592023 containerd[1593]: time="2025-05-27T17:38:14.591974951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6884bd4ddb-5fcc6,Uid:b005d8ad-4da9-4276-9016-e8cce20cb81b,Namespace:calico-system,Attempt:0,} returns sandbox id \"9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c\"" May 27 17:38:14.593288 containerd[1593]: time="2025-05-27T17:38:14.593238505Z" 
level=info msg="CreateContainer within sandbox \"5734aab3893851432c52247d3d64fd82b20a1e7023cad4accb47e245a413f7ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb0db24ff53860d19d553ddc5f22e5b47b981c58fc9f5c9c89be7ddea611877e\"" May 27 17:38:14.593724 containerd[1593]: time="2025-05-27T17:38:14.593685023Z" level=info msg="StartContainer for \"eb0db24ff53860d19d553ddc5f22e5b47b981c58fc9f5c9c89be7ddea611877e\"" May 27 17:38:14.594732 containerd[1593]: time="2025-05-27T17:38:14.594687866Z" level=info msg="connecting to shim eb0db24ff53860d19d553ddc5f22e5b47b981c58fc9f5c9c89be7ddea611877e" address="unix:///run/containerd/s/01fd04b83692723a88e11a148aefcefe02d1b2192e1bad5060d90fad9c817ef9" protocol=ttrpc version=3 May 27 17:38:14.628401 systemd[1]: Started cri-containerd-eb0db24ff53860d19d553ddc5f22e5b47b981c58fc9f5c9c89be7ddea611877e.scope - libcontainer container eb0db24ff53860d19d553ddc5f22e5b47b981c58fc9f5c9c89be7ddea611877e. May 27 17:38:14.671554 containerd[1593]: time="2025-05-27T17:38:14.671495524Z" level=info msg="StartContainer for \"eb0db24ff53860d19d553ddc5f22e5b47b981c58fc9f5c9c89be7ddea611877e\" returns successfully" May 27 17:38:14.919747 containerd[1593]: time="2025-05-27T17:38:14.919701544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-228hb,Uid:fcd8288a-3a1a-4f74-9ffc-0a9589729432,Namespace:calico-system,Attempt:0,}" May 27 17:38:15.052587 systemd-networkd[1491]: cali66b19c55286: Link UP May 27 17:38:15.052854 systemd-networkd[1491]: cali66b19c55286: Gained carrier May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:14.971 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--228hb-eth0 csi-node-driver- calico-system fcd8288a-3a1a-4f74-9ffc-0a9589729432 692 0 2025-05-27 17:37:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-228hb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali66b19c55286 [] [] }} ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Namespace="calico-system" Pod="csi-node-driver-228hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--228hb-" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:14.971 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Namespace="calico-system" Pod="csi-node-driver-228hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--228hb-eth0" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.002 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" HandleID="k8s-pod-network.46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Workload="localhost-k8s-csi--node--driver--228hb-eth0" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.002 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" HandleID="k8s-pod-network.46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Workload="localhost-k8s-csi--node--driver--228hb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000483700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-228hb", "timestamp":"2025-05-27 17:38:15.002501355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.002 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.002 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.002 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.011 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" host="localhost" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.017 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.022 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.024 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.026 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.026 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" host="localhost" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.027 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30 May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.033 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" host="localhost" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.043 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" host="localhost" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.043 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" host="localhost" May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.044 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:38:15.072014 containerd[1593]: 2025-05-27 17:38:15.044 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" HandleID="k8s-pod-network.46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Workload="localhost-k8s-csi--node--driver--228hb-eth0" May 27 17:38:15.072894 containerd[1593]: 2025-05-27 17:38:15.048 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Namespace="calico-system" Pod="csi-node-driver-228hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--228hb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--228hb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcd8288a-3a1a-4f74-9ffc-0a9589729432", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-228hb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66b19c55286", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:15.072894 containerd[1593]: 2025-05-27 17:38:15.048 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Namespace="calico-system" Pod="csi-node-driver-228hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--228hb-eth0" May 27 17:38:15.072894 containerd[1593]: 2025-05-27 17:38:15.048 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66b19c55286 ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Namespace="calico-system" Pod="csi-node-driver-228hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--228hb-eth0" May 27 17:38:15.072894 containerd[1593]: 2025-05-27 17:38:15.051 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Namespace="calico-system" Pod="csi-node-driver-228hb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--228hb-eth0" May 27 17:38:15.072894 containerd[1593]: 2025-05-27 17:38:15.052 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Namespace="calico-system" Pod="csi-node-driver-228hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--228hb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--228hb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcd8288a-3a1a-4f74-9ffc-0a9589729432", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30", Pod:"csi-node-driver-228hb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66b19c55286", MAC:"4e:33:1f:37:e1:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 
17:38:15.072894 containerd[1593]: 2025-05-27 17:38:15.067 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" Namespace="calico-system" Pod="csi-node-driver-228hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--228hb-eth0" May 27 17:38:15.085099 kubelet[2686]: I0527 17:38:15.084344 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w2htf" podStartSLOduration=35.084326452 podStartE2EDuration="35.084326452s" podCreationTimestamp="2025-05-27 17:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:38:15.083593826 +0000 UTC m=+41.275153311" watchObservedRunningTime="2025-05-27 17:38:15.084326452 +0000 UTC m=+41.275885957" May 27 17:38:15.114303 containerd[1593]: time="2025-05-27T17:38:15.114248982Z" level=info msg="connecting to shim 46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30" address="unix:///run/containerd/s/be33687264f0d9ff8fe8855883325580eea9ec9c56a30a2d8a8d91765b6acfbf" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:15.182250 systemd[1]: Started cri-containerd-46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30.scope - libcontainer container 46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30. 
May 27 17:38:15.197484 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:38:15.213684 containerd[1593]: time="2025-05-27T17:38:15.213640055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-228hb,Uid:fcd8288a-3a1a-4f74-9ffc-0a9589729432,Namespace:calico-system,Attempt:0,} returns sandbox id \"46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30\"" May 27 17:38:15.886279 systemd-networkd[1491]: calidf5abf3f655: Gained IPv6LL May 27 17:38:15.910514 containerd[1593]: time="2025-05-27T17:38:15.910473348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cb66b499-cvk4h,Uid:ea297b7e-9a74-4e9e-a08f-ee2c202087d1,Namespace:calico-apiserver,Attempt:0,}" May 27 17:38:16.038314 systemd-networkd[1491]: cali9d602f046f1: Link UP May 27 17:38:16.039202 systemd-networkd[1491]: cali9d602f046f1: Gained carrier May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:15.978 [INFO][4444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0 calico-apiserver-54cb66b499- calico-apiserver ea297b7e-9a74-4e9e-a08f-ee2c202087d1 806 0 2025-05-27 17:37:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54cb66b499 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54cb66b499-cvk4h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d602f046f1 [] [] }} ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-cvk4h" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-" May 27 17:38:16.055524 containerd[1593]: 
2025-05-27 17:38:15.978 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-cvk4h" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.003 [INFO][4459] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" HandleID="k8s-pod-network.7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Workload="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.003 [INFO][4459] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" HandleID="k8s-pod-network.7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Workload="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54cb66b499-cvk4h", "timestamp":"2025-05-27 17:38:16.003318338 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.003 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.003 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.003 [INFO][4459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.010 [INFO][4459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" host="localhost" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.015 [INFO][4459] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.019 [INFO][4459] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.020 [INFO][4459] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.022 [INFO][4459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.022 [INFO][4459] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" host="localhost" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.023 [INFO][4459] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7 May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.026 [INFO][4459] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" host="localhost" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.031 [INFO][4459] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" host="localhost" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.031 [INFO][4459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" host="localhost" May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.031 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:38:16.055524 containerd[1593]: 2025-05-27 17:38:16.031 [INFO][4459] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" HandleID="k8s-pod-network.7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Workload="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" May 27 17:38:16.056572 containerd[1593]: 2025-05-27 17:38:16.034 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-cvk4h" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0", GenerateName:"calico-apiserver-54cb66b499-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea297b7e-9a74-4e9e-a08f-ee2c202087d1", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cb66b499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54cb66b499-cvk4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d602f046f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:16.056572 containerd[1593]: 2025-05-27 17:38:16.035 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-cvk4h" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" May 27 17:38:16.056572 containerd[1593]: 2025-05-27 17:38:16.035 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d602f046f1 ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-cvk4h" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" May 27 17:38:16.056572 containerd[1593]: 2025-05-27 17:38:16.037 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-cvk4h" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" May 27 17:38:16.056572 containerd[1593]: 2025-05-27 17:38:16.042 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-cvk4h" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0", GenerateName:"calico-apiserver-54cb66b499-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea297b7e-9a74-4e9e-a08f-ee2c202087d1", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cb66b499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7", Pod:"calico-apiserver-54cb66b499-cvk4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d602f046f1", MAC:"6e:32:92:98:57:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:16.056572 containerd[1593]: 2025-05-27 17:38:16.052 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" Namespace="calico-apiserver" Pod="calico-apiserver-54cb66b499-cvk4h" WorkloadEndpoint="localhost-k8s-calico--apiserver--54cb66b499--cvk4h-eth0" May 27 17:38:16.078200 systemd-networkd[1491]: cali4b1135e7b1d: Gained IPv6LL May 27 17:38:16.082552 containerd[1593]: time="2025-05-27T17:38:16.082434997Z" level=info msg="connecting to shim 7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7" address="unix:///run/containerd/s/34b98679e481891ff8da3a02bc9ee19c4e4a6b65d873d6cc131cf3613b19639c" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:16.132351 systemd[1]: Started cri-containerd-7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7.scope - libcontainer container 7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7. May 27 17:38:16.151179 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:38:16.206313 systemd-networkd[1491]: cali66b19c55286: Gained IPv6LL May 27 17:38:16.212251 containerd[1593]: time="2025-05-27T17:38:16.212214054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cb66b499-cvk4h,Uid:ea297b7e-9a74-4e9e-a08f-ee2c202087d1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7\"" May 27 17:38:16.463236 systemd-networkd[1491]: calif4221182bf8: Gained IPv6LL May 27 17:38:16.910407 containerd[1593]: time="2025-05-27T17:38:16.910256546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-zdrnc,Uid:d27d1841-eda2-4baf-a093-ef4da86ab409,Namespace:calico-system,Attempt:0,}" May 27 17:38:16.910407 containerd[1593]: time="2025-05-27T17:38:16.910398082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g5ppm,Uid:b06ffdf2-146c-4120-a1e6-0c4f3555105e,Namespace:kube-system,Attempt:0,}" May 27 17:38:17.350100 
containerd[1593]: time="2025-05-27T17:38:17.350020165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:17.350781 containerd[1593]: time="2025-05-27T17:38:17.350732433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 17:38:17.352311 containerd[1593]: time="2025-05-27T17:38:17.352268456Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:17.356108 containerd[1593]: time="2025-05-27T17:38:17.355624747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:17.356291 containerd[1593]: time="2025-05-27T17:38:17.356012636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 2.765437203s" May 27 17:38:17.356413 containerd[1593]: time="2025-05-27T17:38:17.356374535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 17:38:17.358791 containerd[1593]: time="2025-05-27T17:38:17.358759854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 17:38:17.363868 containerd[1593]: time="2025-05-27T17:38:17.363658011Z" level=info msg="CreateContainer within sandbox \"8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe\" 
for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:38:17.376898 containerd[1593]: time="2025-05-27T17:38:17.376836634Z" level=info msg="Container 38b2ba29b5f0e6b6a205c81cdb0b9ea88f079d9bbcfea9fcf2a4af2dd0c65d31: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:17.381333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809360363.mount: Deactivated successfully. May 27 17:38:17.396284 containerd[1593]: time="2025-05-27T17:38:17.396166798Z" level=info msg="CreateContainer within sandbox \"8902db2bae6b21a292b183d33253bcc3ff026d86a15e76806cfe0970159cb7fe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"38b2ba29b5f0e6b6a205c81cdb0b9ea88f079d9bbcfea9fcf2a4af2dd0c65d31\"" May 27 17:38:17.397252 containerd[1593]: time="2025-05-27T17:38:17.397184779Z" level=info msg="StartContainer for \"38b2ba29b5f0e6b6a205c81cdb0b9ea88f079d9bbcfea9fcf2a4af2dd0c65d31\"" May 27 17:38:17.399974 containerd[1593]: time="2025-05-27T17:38:17.399840205Z" level=info msg="connecting to shim 38b2ba29b5f0e6b6a205c81cdb0b9ea88f079d9bbcfea9fcf2a4af2dd0c65d31" address="unix:///run/containerd/s/9f68a7dd1e0c9bdb62624a1fc944275c88d5b1f1620b5972f793e447f6c7b9c4" protocol=ttrpc version=3 May 27 17:38:17.438327 systemd[1]: Started cri-containerd-38b2ba29b5f0e6b6a205c81cdb0b9ea88f079d9bbcfea9fcf2a4af2dd0c65d31.scope - libcontainer container 38b2ba29b5f0e6b6a205c81cdb0b9ea88f079d9bbcfea9fcf2a4af2dd0c65d31. 
May 27 17:38:17.462168 systemd-networkd[1491]: cali3d4af6bcc5a: Link UP May 27 17:38:17.462413 systemd-networkd[1491]: cali3d4af6bcc5a: Gained carrier May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.372 [INFO][4524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0 goldmane-78d55f7ddc- calico-system d27d1841-eda2-4baf-a093-ef4da86ab409 808 0 2025-05-27 17:37:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-zdrnc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3d4af6bcc5a [] [] }} ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zdrnc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--zdrnc-" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.372 [INFO][4524] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zdrnc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.405 [INFO][4559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" HandleID="k8s-pod-network.1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Workload="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.405 [INFO][4559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" 
HandleID="k8s-pod-network.1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Workload="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001396a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-zdrnc", "timestamp":"2025-05-27 17:38:17.405386239 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.405 [INFO][4559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.405 [INFO][4559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.405 [INFO][4559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.413 [INFO][4559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" host="localhost" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.424 [INFO][4559] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.430 [INFO][4559] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.433 [INFO][4559] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.436 [INFO][4559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.436 
[INFO][4559] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" host="localhost" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.439 [INFO][4559] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7 May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.444 [INFO][4559] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" host="localhost" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.451 [INFO][4559] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" host="localhost" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.452 [INFO][4559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" host="localhost" May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.452 [INFO][4559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:38:17.481957 containerd[1593]: 2025-05-27 17:38:17.452 [INFO][4559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" HandleID="k8s-pod-network.1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Workload="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" May 27 17:38:17.482779 containerd[1593]: 2025-05-27 17:38:17.456 [INFO][4524] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zdrnc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"d27d1841-eda2-4baf-a093-ef4da86ab409", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-zdrnc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d4af6bcc5a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:17.482779 containerd[1593]: 2025-05-27 17:38:17.457 [INFO][4524] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zdrnc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" May 27 17:38:17.482779 containerd[1593]: 2025-05-27 17:38:17.457 [INFO][4524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d4af6bcc5a ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zdrnc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" May 27 17:38:17.482779 containerd[1593]: 2025-05-27 17:38:17.461 [INFO][4524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zdrnc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" May 27 17:38:17.482779 containerd[1593]: 2025-05-27 17:38:17.463 [INFO][4524] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zdrnc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"d27d1841-eda2-4baf-a093-ef4da86ab409", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 52, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7", Pod:"goldmane-78d55f7ddc-zdrnc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d4af6bcc5a", MAC:"4a:0e:f4:28:ea:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:17.482779 containerd[1593]: 2025-05-27 17:38:17.478 [INFO][4524] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zdrnc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--zdrnc-eth0" May 27 17:38:17.572869 containerd[1593]: time="2025-05-27T17:38:17.572810669Z" level=info msg="StartContainer for \"38b2ba29b5f0e6b6a205c81cdb0b9ea88f079d9bbcfea9fcf2a4af2dd0c65d31\" returns successfully" May 27 17:38:17.597267 systemd-networkd[1491]: calib6924d837b9: Link UP May 27 17:38:17.599152 systemd-networkd[1491]: calib6924d837b9: Gained carrier May 27 17:38:17.601172 containerd[1593]: time="2025-05-27T17:38:17.600318837Z" level=info msg="connecting to shim 1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7" 
address="unix:///run/containerd/s/244c824e28837cee467b1a70d877a4d363801f0a6fb5d462b41f114d4eaab25e" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:17.618172 systemd-networkd[1491]: cali9d602f046f1: Gained IPv6LL May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.374 [INFO][4545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0 coredns-674b8bbfcf- kube-system b06ffdf2-146c-4120-a1e6-0c4f3555105e 807 0 2025-05-27 17:37:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-g5ppm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6924d837b9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Namespace="kube-system" Pod="coredns-674b8bbfcf-g5ppm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g5ppm-" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.374 [INFO][4545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Namespace="kube-system" Pod="coredns-674b8bbfcf-g5ppm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.421 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" HandleID="k8s-pod-network.8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Workload="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.422 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" HandleID="k8s-pod-network.8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Workload="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aed70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-g5ppm", "timestamp":"2025-05-27 17:38:17.421672177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.422 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.452 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.452 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.514 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" host="localhost" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.521 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.531 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.533 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.535 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.535 [INFO][4561] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" host="localhost" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.537 [INFO][4561] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830 May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.567 [INFO][4561] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" host="localhost" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.578 [INFO][4561] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" host="localhost" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.578 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" host="localhost" May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.579 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:38:17.625116 containerd[1593]: 2025-05-27 17:38:17.579 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" HandleID="k8s-pod-network.8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Workload="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" May 27 17:38:17.625688 containerd[1593]: 2025-05-27 17:38:17.593 [INFO][4545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Namespace="kube-system" Pod="coredns-674b8bbfcf-g5ppm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b06ffdf2-146c-4120-a1e6-0c4f3555105e", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-g5ppm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6924d837b9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:17.625688 containerd[1593]: 2025-05-27 17:38:17.593 [INFO][4545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Namespace="kube-system" Pod="coredns-674b8bbfcf-g5ppm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" May 27 17:38:17.625688 containerd[1593]: 2025-05-27 17:38:17.593 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6924d837b9 ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Namespace="kube-system" Pod="coredns-674b8bbfcf-g5ppm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" May 27 17:38:17.625688 containerd[1593]: 2025-05-27 17:38:17.600 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Namespace="kube-system" Pod="coredns-674b8bbfcf-g5ppm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" May 27 17:38:17.625688 containerd[1593]: 2025-05-27 17:38:17.601 [INFO][4545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Namespace="kube-system" Pod="coredns-674b8bbfcf-g5ppm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b06ffdf2-146c-4120-a1e6-0c4f3555105e", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830", Pod:"coredns-674b8bbfcf-g5ppm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6924d837b9", MAC:"52:a1:c3:1d:c0:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:38:17.625688 containerd[1593]: 2025-05-27 17:38:17.612 [INFO][4545] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" Namespace="kube-system" Pod="coredns-674b8bbfcf-g5ppm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g5ppm-eth0" May 27 17:38:17.635267 systemd[1]: Started cri-containerd-1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7.scope - libcontainer container 1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7. May 27 17:38:17.652859 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:38:17.655596 containerd[1593]: time="2025-05-27T17:38:17.655367323Z" level=info msg="connecting to shim 8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830" address="unix:///run/containerd/s/fc0cc98d46c91d483aac08ecd80a9cdeef6895ec5f58fb1dcf6b8e42a46b439b" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:17.694356 systemd[1]: Started cri-containerd-8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830.scope - libcontainer container 8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830. 
May 27 17:38:17.704966 containerd[1593]: time="2025-05-27T17:38:17.704910615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-zdrnc,Uid:d27d1841-eda2-4baf-a093-ef4da86ab409,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f32e3f1e18548e2439d9e06975f5acbd96a822ae857a090afcba362fbdf27d7\"" May 27 17:38:17.710597 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:38:17.742674 containerd[1593]: time="2025-05-27T17:38:17.742630246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g5ppm,Uid:b06ffdf2-146c-4120-a1e6-0c4f3555105e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830\"" May 27 17:38:17.752932 containerd[1593]: time="2025-05-27T17:38:17.752875572Z" level=info msg="CreateContainer within sandbox \"8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:38:17.763296 containerd[1593]: time="2025-05-27T17:38:17.763246103Z" level=info msg="Container fdd5936bf822efc04f7c8ee3dc3ac63c8cddd516feaaf4b0e03a09601fc44f27: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:17.771714 containerd[1593]: time="2025-05-27T17:38:17.771668888Z" level=info msg="CreateContainer within sandbox \"8f28fcf7c5d9ba85a626c842eb1cf772e5024f462cfda95234057289248c8830\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdd5936bf822efc04f7c8ee3dc3ac63c8cddd516feaaf4b0e03a09601fc44f27\"" May 27 17:38:17.772213 containerd[1593]: time="2025-05-27T17:38:17.772187061Z" level=info msg="StartContainer for \"fdd5936bf822efc04f7c8ee3dc3ac63c8cddd516feaaf4b0e03a09601fc44f27\"" May 27 17:38:17.773084 containerd[1593]: time="2025-05-27T17:38:17.773036385Z" level=info msg="connecting to shim fdd5936bf822efc04f7c8ee3dc3ac63c8cddd516feaaf4b0e03a09601fc44f27" 
address="unix:///run/containerd/s/fc0cc98d46c91d483aac08ecd80a9cdeef6895ec5f58fb1dcf6b8e42a46b439b" protocol=ttrpc version=3 May 27 17:38:17.805280 systemd[1]: Started cri-containerd-fdd5936bf822efc04f7c8ee3dc3ac63c8cddd516feaaf4b0e03a09601fc44f27.scope - libcontainer container fdd5936bf822efc04f7c8ee3dc3ac63c8cddd516feaaf4b0e03a09601fc44f27. May 27 17:38:17.843107 containerd[1593]: time="2025-05-27T17:38:17.842993774Z" level=info msg="StartContainer for \"fdd5936bf822efc04f7c8ee3dc3ac63c8cddd516feaaf4b0e03a09601fc44f27\" returns successfully" May 27 17:38:18.298352 kubelet[2686]: I0527 17:38:18.298254 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54cb66b499-csx8g" podStartSLOduration=26.529872593 podStartE2EDuration="29.298227114s" podCreationTimestamp="2025-05-27 17:37:49 +0000 UTC" firstStartedPulling="2025-05-27 17:38:14.590164801 +0000 UTC m=+40.781724267" lastFinishedPulling="2025-05-27 17:38:17.358519323 +0000 UTC m=+43.550078788" observedRunningTime="2025-05-27 17:38:18.296383283 +0000 UTC m=+44.487942768" watchObservedRunningTime="2025-05-27 17:38:18.298227114 +0000 UTC m=+44.489786620" May 27 17:38:18.416365 kubelet[2686]: I0527 17:38:18.416199 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g5ppm" podStartSLOduration=38.41617858 podStartE2EDuration="38.41617858s" podCreationTimestamp="2025-05-27 17:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:38:18.415148426 +0000 UTC m=+44.606707901" watchObservedRunningTime="2025-05-27 17:38:18.41617858 +0000 UTC m=+44.607738055" May 27 17:38:18.775602 systemd[1]: Started sshd@7-10.0.0.25:22-10.0.0.1:58580.service - OpenSSH per-connection server daemon (10.0.0.1:58580). 
May 27 17:38:18.859391 sshd[4777]: Accepted publickey for core from 10.0.0.1 port 58580 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:18.862237 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:18.868697 systemd-logind[1577]: New session 8 of user core. May 27 17:38:18.880310 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 17:38:18.895321 systemd-networkd[1491]: cali3d4af6bcc5a: Gained IPv6LL May 27 17:38:19.255619 sshd[4780]: Connection closed by 10.0.0.1 port 58580 May 27 17:38:19.256567 sshd-session[4777]: pam_unix(sshd:session): session closed for user core May 27 17:38:19.262134 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:58580.service: Deactivated successfully. May 27 17:38:19.265998 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:38:19.269768 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. May 27 17:38:19.271866 systemd-logind[1577]: Removed session 8. 
May 27 17:38:19.534257 systemd-networkd[1491]: calib6924d837b9: Gained IPv6LL May 27 17:38:19.883174 containerd[1593]: time="2025-05-27T17:38:19.883024481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:19.884538 containerd[1593]: time="2025-05-27T17:38:19.884508958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 17:38:19.886762 containerd[1593]: time="2025-05-27T17:38:19.886731020Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:19.889321 containerd[1593]: time="2025-05-27T17:38:19.889269586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:19.889795 containerd[1593]: time="2025-05-27T17:38:19.889750067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.530955809s" May 27 17:38:19.889836 containerd[1593]: time="2025-05-27T17:38:19.889793960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 17:38:19.890748 containerd[1593]: time="2025-05-27T17:38:19.890714458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 17:38:19.907098 containerd[1593]: 
time="2025-05-27T17:38:19.907020269Z" level=info msg="CreateContainer within sandbox \"9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 17:38:19.918465 containerd[1593]: time="2025-05-27T17:38:19.918417244Z" level=info msg="Container e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:19.929655 containerd[1593]: time="2025-05-27T17:38:19.929588476Z" level=info msg="CreateContainer within sandbox \"9173eea7e5e7c2e82d718f0e1c87acadca98499f602c4b62dc02ee7f85c8d74c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063\"" May 27 17:38:19.930308 containerd[1593]: time="2025-05-27T17:38:19.930285144Z" level=info msg="StartContainer for \"e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063\"" May 27 17:38:19.931691 containerd[1593]: time="2025-05-27T17:38:19.931650507Z" level=info msg="connecting to shim e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063" address="unix:///run/containerd/s/ed37aee3ed736e2fae87f00f907afa8d3e4dff94bd4bbcf304bb00ccc653f841" protocol=ttrpc version=3 May 27 17:38:19.982283 systemd[1]: Started cri-containerd-e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063.scope - libcontainer container e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063. 
May 27 17:38:20.068762 containerd[1593]: time="2025-05-27T17:38:20.068698068Z" level=info msg="StartContainer for \"e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063\" returns successfully" May 27 17:38:20.093602 kubelet[2686]: I0527 17:38:20.093444 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6884bd4ddb-5fcc6" podStartSLOduration=22.796021205 podStartE2EDuration="28.093429032s" podCreationTimestamp="2025-05-27 17:37:52 +0000 UTC" firstStartedPulling="2025-05-27 17:38:14.593131633 +0000 UTC m=+40.784691108" lastFinishedPulling="2025-05-27 17:38:19.890539459 +0000 UTC m=+46.082098935" observedRunningTime="2025-05-27 17:38:20.092640372 +0000 UTC m=+46.284199847" watchObservedRunningTime="2025-05-27 17:38:20.093429032 +0000 UTC m=+46.284988507" May 27 17:38:21.080677 kubelet[2686]: I0527 17:38:21.080634 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:38:22.216009 containerd[1593]: time="2025-05-27T17:38:22.215935059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:22.216850 containerd[1593]: time="2025-05-27T17:38:22.216807726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 17:38:22.218162 containerd[1593]: time="2025-05-27T17:38:22.218121803Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:22.220774 containerd[1593]: time="2025-05-27T17:38:22.220718607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:22.221296 containerd[1593]: time="2025-05-27T17:38:22.221266646Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.330524397s" May 27 17:38:22.221365 containerd[1593]: time="2025-05-27T17:38:22.221300740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 17:38:22.224294 containerd[1593]: time="2025-05-27T17:38:22.224249795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:38:22.229777 containerd[1593]: time="2025-05-27T17:38:22.229695837Z" level=info msg="CreateContainer within sandbox \"46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 17:38:22.250255 containerd[1593]: time="2025-05-27T17:38:22.250197588Z" level=info msg="Container 6f86ef167ee24cba60158a68ddc4fa74fe725bade5f63e03cca33f6fcb7bf240: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:22.276571 containerd[1593]: time="2025-05-27T17:38:22.276500046Z" level=info msg="CreateContainer within sandbox \"46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6f86ef167ee24cba60158a68ddc4fa74fe725bade5f63e03cca33f6fcb7bf240\"" May 27 17:38:22.277434 containerd[1593]: time="2025-05-27T17:38:22.277376041Z" level=info msg="StartContainer for \"6f86ef167ee24cba60158a68ddc4fa74fe725bade5f63e03cca33f6fcb7bf240\"" May 27 17:38:22.279722 containerd[1593]: time="2025-05-27T17:38:22.279678452Z" level=info msg="connecting to shim 6f86ef167ee24cba60158a68ddc4fa74fe725bade5f63e03cca33f6fcb7bf240" 
address="unix:///run/containerd/s/be33687264f0d9ff8fe8855883325580eea9ec9c56a30a2d8a8d91765b6acfbf" protocol=ttrpc version=3 May 27 17:38:22.311342 systemd[1]: Started cri-containerd-6f86ef167ee24cba60158a68ddc4fa74fe725bade5f63e03cca33f6fcb7bf240.scope - libcontainer container 6f86ef167ee24cba60158a68ddc4fa74fe725bade5f63e03cca33f6fcb7bf240. May 27 17:38:22.369231 containerd[1593]: time="2025-05-27T17:38:22.369164945Z" level=info msg="StartContainer for \"6f86ef167ee24cba60158a68ddc4fa74fe725bade5f63e03cca33f6fcb7bf240\" returns successfully" May 27 17:38:22.715792 containerd[1593]: time="2025-05-27T17:38:22.715717168Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:22.720731 containerd[1593]: time="2025-05-27T17:38:22.716791064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 17:38:22.720731 containerd[1593]: time="2025-05-27T17:38:22.719138569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 494.844281ms" May 27 17:38:22.720903 containerd[1593]: time="2025-05-27T17:38:22.720756617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 17:38:22.722206 containerd[1593]: time="2025-05-27T17:38:22.722121619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:38:22.727152 containerd[1593]: time="2025-05-27T17:38:22.727101646Z" level=info msg="CreateContainer within sandbox 
\"7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:38:22.738132 containerd[1593]: time="2025-05-27T17:38:22.737834972Z" level=info msg="Container c1509839a78134a3fbefe0424c4d6f1f2740be21397dd871aac462e35c5a3f28: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:22.751196 containerd[1593]: time="2025-05-27T17:38:22.751125248Z" level=info msg="CreateContainer within sandbox \"7b6965d17db13b4419cdedadeee6aff6f0c3bc7482785aded1eabd7a4f1dd0a7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c1509839a78134a3fbefe0424c4d6f1f2740be21397dd871aac462e35c5a3f28\"" May 27 17:38:22.753845 containerd[1593]: time="2025-05-27T17:38:22.753715629Z" level=info msg="StartContainer for \"c1509839a78134a3fbefe0424c4d6f1f2740be21397dd871aac462e35c5a3f28\"" May 27 17:38:22.756642 containerd[1593]: time="2025-05-27T17:38:22.756585546Z" level=info msg="connecting to shim c1509839a78134a3fbefe0424c4d6f1f2740be21397dd871aac462e35c5a3f28" address="unix:///run/containerd/s/34b98679e481891ff8da3a02bc9ee19c4e4a6b65d873d6cc131cf3613b19639c" protocol=ttrpc version=3 May 27 17:38:22.783260 systemd[1]: Started cri-containerd-c1509839a78134a3fbefe0424c4d6f1f2740be21397dd871aac462e35c5a3f28.scope - libcontainer container c1509839a78134a3fbefe0424c4d6f1f2740be21397dd871aac462e35c5a3f28. 
May 27 17:38:22.951055 containerd[1593]: time="2025-05-27T17:38:22.951009331Z" level=info msg="StartContainer for \"c1509839a78134a3fbefe0424c4d6f1f2740be21397dd871aac462e35c5a3f28\" returns successfully" May 27 17:38:22.994999 containerd[1593]: time="2025-05-27T17:38:22.994821885Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:38:23.022706 containerd[1593]: time="2025-05-27T17:38:23.022622894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:38:23.022913 containerd[1593]: time="2025-05-27T17:38:23.022763287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:38:23.023297 kubelet[2686]: E0527 17:38:23.023199 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:38:23.023909 kubelet[2686]: E0527 17:38:23.023306 2686 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:38:23.023909 kubelet[2686]: E0527 17:38:23.023645 2686 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qd8w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Comma
nd:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-zdrnc_calico-system(d27d1841-eda2-4baf-a093-ef4da86ab409): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:38:23.024040 containerd[1593]: time="2025-05-27T17:38:23.023997674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 17:38:23.025326 kubelet[2686]: E0527 17:38:23.025246 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zdrnc" podUID="d27d1841-eda2-4baf-a093-ef4da86ab409" May 27 17:38:23.101745 kubelet[2686]: E0527 17:38:23.101674 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zdrnc" podUID="d27d1841-eda2-4baf-a093-ef4da86ab409" May 27 17:38:23.111782 kubelet[2686]: I0527 17:38:23.111685 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54cb66b499-cvk4h" podStartSLOduration=27.6043199 podStartE2EDuration="34.111666095s" podCreationTimestamp="2025-05-27 17:37:49 +0000 UTC" firstStartedPulling="2025-05-27 17:38:16.214354243 +0000 UTC m=+42.405913718" lastFinishedPulling="2025-05-27 17:38:22.721700438 +0000 UTC m=+48.913259913" observedRunningTime="2025-05-27 17:38:23.110499435 +0000 UTC m=+49.302058921" watchObservedRunningTime="2025-05-27 17:38:23.111666095 +0000 UTC m=+49.303225570" May 27 17:38:24.102604 kubelet[2686]: I0527 17:38:24.102562 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:38:24.271178 systemd[1]: Started 
sshd@8-10.0.0.25:22-10.0.0.1:45564.service - OpenSSH per-connection server daemon (10.0.0.1:45564). May 27 17:38:24.352790 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 45564 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:24.355052 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:24.361040 systemd-logind[1577]: New session 9 of user core. May 27 17:38:24.372863 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 17:38:24.526324 sshd[4938]: Connection closed by 10.0.0.1 port 45564 May 27 17:38:24.526673 sshd-session[4935]: pam_unix(sshd:session): session closed for user core May 27 17:38:24.532191 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:45564.service: Deactivated successfully. May 27 17:38:24.534750 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:38:24.535946 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. May 27 17:38:24.538394 systemd-logind[1577]: Removed session 9. 
May 27 17:38:25.718544 containerd[1593]: time="2025-05-27T17:38:25.718463684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:25.719379 containerd[1593]: time="2025-05-27T17:38:25.719317196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 17:38:25.721155 containerd[1593]: time="2025-05-27T17:38:25.721118136Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:25.723625 containerd[1593]: time="2025-05-27T17:38:25.723573444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:25.724365 containerd[1593]: time="2025-05-27T17:38:25.724304877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.70027893s" May 27 17:38:25.724365 containerd[1593]: time="2025-05-27T17:38:25.724357586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 17:38:25.725885 containerd[1593]: time="2025-05-27T17:38:25.725562277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:38:25.730916 containerd[1593]: time="2025-05-27T17:38:25.730862864Z" level=info 
msg="CreateContainer within sandbox \"46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 17:38:25.742915 containerd[1593]: time="2025-05-27T17:38:25.742835703Z" level=info msg="Container 4f14191450ed579fc0ce12fa78fb3407610b7eb7e8b775a145f981376028888b: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:25.756256 containerd[1593]: time="2025-05-27T17:38:25.756196116Z" level=info msg="CreateContainer within sandbox \"46e88fcf2905bfda61c93be9537d5e177edb331eafa6dca48fac770bddc68e30\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4f14191450ed579fc0ce12fa78fb3407610b7eb7e8b775a145f981376028888b\"" May 27 17:38:25.756926 containerd[1593]: time="2025-05-27T17:38:25.756826910Z" level=info msg="StartContainer for \"4f14191450ed579fc0ce12fa78fb3407610b7eb7e8b775a145f981376028888b\"" May 27 17:38:25.758580 containerd[1593]: time="2025-05-27T17:38:25.758550665Z" level=info msg="connecting to shim 4f14191450ed579fc0ce12fa78fb3407610b7eb7e8b775a145f981376028888b" address="unix:///run/containerd/s/be33687264f0d9ff8fe8855883325580eea9ec9c56a30a2d8a8d91765b6acfbf" protocol=ttrpc version=3 May 27 17:38:25.796461 systemd[1]: Started cri-containerd-4f14191450ed579fc0ce12fa78fb3407610b7eb7e8b775a145f981376028888b.scope - libcontainer container 4f14191450ed579fc0ce12fa78fb3407610b7eb7e8b775a145f981376028888b. 
May 27 17:38:25.851181 containerd[1593]: time="2025-05-27T17:38:25.851129835Z" level=info msg="StartContainer for \"4f14191450ed579fc0ce12fa78fb3407610b7eb7e8b775a145f981376028888b\" returns successfully" May 27 17:38:25.981581 kubelet[2686]: I0527 17:38:25.981405 2686 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 17:38:25.983244 kubelet[2686]: I0527 17:38:25.983191 2686 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 17:38:26.003056 containerd[1593]: time="2025-05-27T17:38:26.002970682Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:38:26.004150 containerd[1593]: time="2025-05-27T17:38:26.004112705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:38:26.004235 containerd[1593]: time="2025-05-27T17:38:26.004154743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:38:26.004447 kubelet[2686]: E0527 17:38:26.004402 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:38:26.004616 kubelet[2686]: E0527 17:38:26.004457 2686 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:38:26.004709 kubelet[2686]: E0527 17:38:26.004601 2686 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a3540ee0cc6d4d5ba3e9e8680912f34a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wnsmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:n
il,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-854ff896c6-s24tx_calico-system(8474f19b-dee7-4c30-b00b-6e2228a0c605): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:38:26.006887 containerd[1593]: time="2025-05-27T17:38:26.006838269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:38:26.276337 containerd[1593]: time="2025-05-27T17:38:26.276145840Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:38:26.277693 containerd[1593]: time="2025-05-27T17:38:26.277643030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:38:26.277824 containerd[1593]: 
time="2025-05-27T17:38:26.277694947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:38:26.278022 kubelet[2686]: E0527 17:38:26.277934 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:38:26.278022 kubelet[2686]: E0527 17:38:26.278000 2686 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:38:26.278391 kubelet[2686]: E0527 17:38:26.278197 2686 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnsmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-854ff896c6-s24tx_calico-system(8474f19b-dee7-4c30-b00b-6e2228a0c605): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:38:26.279451 kubelet[2686]: E0527 17:38:26.279395 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-854ff896c6-s24tx" podUID="8474f19b-dee7-4c30-b00b-6e2228a0c605" May 27 17:38:29.001176 kubelet[2686]: I0527 17:38:29.001038 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:38:29.093377 containerd[1593]: time="2025-05-27T17:38:29.093309089Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063\" id:\"be15fef12ff84d16e57fe015e3085d03cb4adec602d998ddc3b59a326ff8faf8\" pid:5005 exited_at:{seconds:1748367509 nanos:92820202}" May 27 17:38:29.126035 kubelet[2686]: I0527 17:38:29.125011 2686 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-system/csi-node-driver-228hb" podStartSLOduration=26.614347867 podStartE2EDuration="37.124987861s" podCreationTimestamp="2025-05-27 17:37:52 +0000 UTC" firstStartedPulling="2025-05-27 17:38:15.214634172 +0000 UTC m=+41.406193647" lastFinishedPulling="2025-05-27 17:38:25.725274165 +0000 UTC m=+51.916833641" observedRunningTime="2025-05-27 17:38:26.123574736 +0000 UTC m=+52.315134211" watchObservedRunningTime="2025-05-27 17:38:29.124987861 +0000 UTC m=+55.316547336" May 27 17:38:29.163254 containerd[1593]: time="2025-05-27T17:38:29.163194511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063\" id:\"1f55d6ff43b514a9c7b9dedec54614bbde946ebdb4e5dcf0e6fcdecd08bc196a\" pid:5027 exited_at:{seconds:1748367509 nanos:162192861}" May 27 17:38:29.543766 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:45572.service - OpenSSH per-connection server daemon (10.0.0.1:45572). May 27 17:38:29.599724 sshd[5038]: Accepted publickey for core from 10.0.0.1 port 45572 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:29.601671 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:29.607183 systemd-logind[1577]: New session 10 of user core. May 27 17:38:29.617265 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 17:38:29.756413 sshd[5041]: Connection closed by 10.0.0.1 port 45572 May 27 17:38:29.756845 sshd-session[5038]: pam_unix(sshd:session): session closed for user core May 27 17:38:29.761371 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:45572.service: Deactivated successfully. May 27 17:38:29.763804 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:38:29.765090 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. May 27 17:38:29.766644 systemd-logind[1577]: Removed session 10. 
May 27 17:38:33.910883 containerd[1593]: time="2025-05-27T17:38:33.910653366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:38:34.202444 containerd[1593]: time="2025-05-27T17:38:34.202291882Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:38:34.203745 containerd[1593]: time="2025-05-27T17:38:34.203708620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:38:34.203878 containerd[1593]: time="2025-05-27T17:38:34.203834656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:38:34.204032 kubelet[2686]: E0527 17:38:34.203978 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:38:34.204459 kubelet[2686]: E0527 17:38:34.204039 2686 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:38:34.204459 kubelet[2686]: E0527 17:38:34.204209 2686 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qd8w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-zdrnc_calico-system(d27d1841-eda2-4baf-a093-ef4da86ab409): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:38:34.205466 kubelet[2686]: E0527 17:38:34.205406 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zdrnc" podUID="d27d1841-eda2-4baf-a093-ef4da86ab409" May 27 17:38:34.782500 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:55716.service - OpenSSH per-connection server daemon (10.0.0.1:55716). May 27 17:38:34.839715 sshd[5065]: Accepted publickey for core from 10.0.0.1 port 55716 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:34.841181 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:34.845878 systemd-logind[1577]: New session 11 of user core. May 27 17:38:34.856229 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:38:34.998083 sshd[5067]: Connection closed by 10.0.0.1 port 55716 May 27 17:38:34.999875 sshd-session[5065]: pam_unix(sshd:session): session closed for user core May 27 17:38:35.009282 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:55716.service: Deactivated successfully. May 27 17:38:35.011448 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:38:35.012488 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. May 27 17:38:35.016785 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:55726.service - OpenSSH per-connection server daemon (10.0.0.1:55726). May 27 17:38:35.017450 systemd-logind[1577]: Removed session 11. May 27 17:38:35.229925 sshd[5081]: Accepted publickey for core from 10.0.0.1 port 55726 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:35.231976 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:35.237316 systemd-logind[1577]: New session 12 of user core. May 27 17:38:35.252319 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 27 17:38:35.433248 sshd[5083]: Connection closed by 10.0.0.1 port 55726 May 27 17:38:35.433515 sshd-session[5081]: pam_unix(sshd:session): session closed for user core May 27 17:38:35.444524 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:55726.service: Deactivated successfully. May 27 17:38:35.446721 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:38:35.447815 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. May 27 17:38:35.451653 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:55742.service - OpenSSH per-connection server daemon (10.0.0.1:55742). May 27 17:38:35.452778 systemd-logind[1577]: Removed session 12. May 27 17:38:35.501983 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 55742 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:35.503836 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:35.508751 systemd-logind[1577]: New session 13 of user core. May 27 17:38:35.525329 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 17:38:35.643605 sshd[5097]: Connection closed by 10.0.0.1 port 55742 May 27 17:38:35.643991 sshd-session[5095]: pam_unix(sshd:session): session closed for user core May 27 17:38:35.648794 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:55742.service: Deactivated successfully. May 27 17:38:35.651033 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:38:35.651931 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. May 27 17:38:35.653595 systemd-logind[1577]: Removed session 13. May 27 17:38:40.664786 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:55746.service - OpenSSH per-connection server daemon (10.0.0.1:55746). 
May 27 17:38:40.713892 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 55746 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:40.716028 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:40.721188 systemd-logind[1577]: New session 14 of user core. May 27 17:38:40.729325 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:38:40.846777 sshd[5120]: Connection closed by 10.0.0.1 port 55746 May 27 17:38:40.847181 sshd-session[5118]: pam_unix(sshd:session): session closed for user core May 27 17:38:40.851800 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:55746.service: Deactivated successfully. May 27 17:38:40.854539 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:38:40.855929 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. May 27 17:38:40.858007 systemd-logind[1577]: Removed session 14. May 27 17:38:40.911624 kubelet[2686]: E0527 17:38:40.911541 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-854ff896c6-s24tx" podUID="8474f19b-dee7-4c30-b00b-6e2228a0c605" May 27 17:38:41.114216 containerd[1593]: time="2025-05-27T17:38:41.114143331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597\" id:\"fcbe9e82bf2f2d4c7e0df3e0e8d7b1a40098509e6ebbc7ab70fd45c9fcc85d3e\" pid:5144 exit_status:1 exited_at:{seconds:1748367521 nanos:113370195}" May 27 17:38:41.228684 containerd[1593]: time="2025-05-27T17:38:41.228642846Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597\" id:\"31135aeebbffc541784eb2fbb2886e4a6b1c0127d76f2d77fb53aaabea8e5df7\" pid:5168 exit_status:1 exited_at:{seconds:1748367521 nanos:228372106}" May 27 17:38:44.630695 kubelet[2686]: I0527 17:38:44.630627 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:38:45.865332 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:56304.service - OpenSSH per-connection server daemon (10.0.0.1:56304). May 27 17:38:45.926766 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 56304 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:45.929778 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:45.934210 systemd-logind[1577]: New session 15 of user core. May 27 17:38:45.945266 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 17:38:46.093573 sshd[5188]: Connection closed by 10.0.0.1 port 56304 May 27 17:38:46.098363 systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:56304.service: Deactivated successfully. 
May 27 17:38:46.093927 sshd-session[5186]: pam_unix(sshd:session): session closed for user core May 27 17:38:46.100636 systemd[1]: session-15.scope: Deactivated successfully. May 27 17:38:46.101546 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. May 27 17:38:46.102905 systemd-logind[1577]: Removed session 15. May 27 17:38:46.910591 kubelet[2686]: E0527 17:38:46.910535 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zdrnc" podUID="d27d1841-eda2-4baf-a093-ef4da86ab409" May 27 17:38:51.114383 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:56310.service - OpenSSH per-connection server daemon (10.0.0.1:56310). May 27 17:38:51.175551 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 56310 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:51.177221 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:51.182351 systemd-logind[1577]: New session 16 of user core. May 27 17:38:51.193233 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 17:38:51.305949 sshd[5205]: Connection closed by 10.0.0.1 port 56310 May 27 17:38:51.306289 sshd-session[5203]: pam_unix(sshd:session): session closed for user core May 27 17:38:51.311432 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:56310.service: Deactivated successfully. May 27 17:38:51.313589 systemd[1]: session-16.scope: Deactivated successfully. 
May 27 17:38:51.314390 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. May 27 17:38:51.315824 systemd-logind[1577]: Removed session 16. May 27 17:38:55.911485 containerd[1593]: time="2025-05-27T17:38:55.911433228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:38:56.159372 containerd[1593]: time="2025-05-27T17:38:56.159308567Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:38:56.160824 containerd[1593]: time="2025-05-27T17:38:56.160756908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:38:56.160973 containerd[1593]: time="2025-05-27T17:38:56.160870935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:38:56.161114 kubelet[2686]: E0527 17:38:56.161046 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:38:56.161682 kubelet[2686]: E0527 17:38:56.161131 2686 
kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:38:56.161682 kubelet[2686]: E0527 17:38:56.161265 2686 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a3540ee0cc6d4d5ba3e9e8680912f34a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wnsmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizeP
olicy{},RestartPolicy:nil,} start failed in pod whisker-854ff896c6-s24tx_calico-system(8474f19b-dee7-4c30-b00b-6e2228a0c605): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:38:56.163633 containerd[1593]: time="2025-05-27T17:38:56.163555232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:38:56.319314 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:58826.service - OpenSSH per-connection server daemon (10.0.0.1:58826). May 27 17:38:56.390005 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 58826 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:38:56.392338 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:56.399802 systemd-logind[1577]: New session 17 of user core. 
May 27 17:38:56.406358 containerd[1593]: time="2025-05-27T17:38:56.406280823Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:38:56.407801 containerd[1593]: time="2025-05-27T17:38:56.407759972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:38:56.407939 containerd[1593]: time="2025-05-27T17:38:56.407824595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 17:38:56.408172 kubelet[2686]: E0527 17:38:56.408105 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:38:56.408268 kubelet[2686]: E0527 17:38:56.408188 2686 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:38:56.408411 kubelet[2686]: E0527 17:38:56.408358 2686 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnsmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-854ff896c6-s24tx_calico-system(8474f19b-dee7-4c30-b00b-6e2228a0c605): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:38:56.409644 kubelet[2686]: E0527 17:38:56.409570 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-854ff896c6-s24tx" podUID="8474f19b-dee7-4c30-b00b-6e2228a0c605"
May 27 17:38:56.414411 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 17:38:56.549691 sshd[5227]: Connection closed by 10.0.0.1 port 58826
May 27 17:38:56.550095 sshd-session[5225]: pam_unix(sshd:session): session closed for user core
May 27 17:38:56.555132 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:58826.service: Deactivated successfully.
May 27 17:38:56.558690 systemd[1]: session-17.scope: Deactivated successfully.
May 27 17:38:56.559732 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit.
May 27 17:38:56.562211 systemd-logind[1577]: Removed session 17.
May 27 17:38:59.150542 containerd[1593]: time="2025-05-27T17:38:59.150490539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1298d36466864c3fc6c918c51555ee64e9e470e30b863f6237224e728183063\" id:\"10e9e38faa8e620b993dbd7d680113df42f152721c13604491124ba3ba729eff\" pid:5254 exited_at:{seconds:1748367539 nanos:150189766}"
May 27 17:39:00.911196 containerd[1593]: time="2025-05-27T17:39:00.911126000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:39:01.176183 containerd[1593]: time="2025-05-27T17:39:01.176022293Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:39:01.177298 containerd[1593]: time="2025-05-27T17:39:01.177219871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:39:01.177402 containerd[1593]: time="2025-05-27T17:39:01.177283752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:39:01.177498 kubelet[2686]: E0527 17:39:01.177450 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:39:01.177867 kubelet[2686]: E0527 17:39:01.177505 2686 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:39:01.177867 kubelet[2686]: E0527 17:39:01.177651 2686 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qd8w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-zdrnc_calico-system(d27d1841-eda2-4baf-a093-ef4da86ab409): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:39:01.178854 kubelet[2686]: E0527 17:39:01.178814 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zdrnc" podUID="d27d1841-eda2-4baf-a093-ef4da86ab409"
May 27 17:39:01.575529 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:58834.service - OpenSSH per-connection server daemon (10.0.0.1:58834).
May 27 17:39:01.623900 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 58834 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:39:01.625434 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:01.629538 systemd-logind[1577]: New session 18 of user core.
May 27 17:39:01.640220 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 17:39:01.757221 sshd[5267]: Connection closed by 10.0.0.1 port 58834
May 27 17:39:01.757541 sshd-session[5265]: pam_unix(sshd:session): session closed for user core
May 27 17:39:01.773965 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:58834.service: Deactivated successfully.
May 27 17:39:01.776029 systemd[1]: session-18.scope: Deactivated successfully.
May 27 17:39:01.776874 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit.
May 27 17:39:01.780129 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:58844.service - OpenSSH per-connection server daemon (10.0.0.1:58844).
May 27 17:39:01.780781 systemd-logind[1577]: Removed session 18.
May 27 17:39:01.830027 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 58844 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:39:01.831685 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:01.836390 systemd-logind[1577]: New session 19 of user core.
May 27 17:39:01.851273 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 17:39:02.229758 sshd[5283]: Connection closed by 10.0.0.1 port 58844
May 27 17:39:02.230351 sshd-session[5281]: pam_unix(sshd:session): session closed for user core
May 27 17:39:02.241655 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:58844.service: Deactivated successfully.
May 27 17:39:02.244231 systemd[1]: session-19.scope: Deactivated successfully.
May 27 17:39:02.245173 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit.
May 27 17:39:02.249877 systemd[1]: Started sshd@19-10.0.0.25:22-10.0.0.1:58848.service - OpenSSH per-connection server daemon (10.0.0.1:58848).
May 27 17:39:02.250594 systemd-logind[1577]: Removed session 19.
May 27 17:39:02.315848 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 58848 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:39:02.317877 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:02.323242 systemd-logind[1577]: New session 20 of user core.
May 27 17:39:02.332232 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 17:39:03.144683 sshd[5297]: Connection closed by 10.0.0.1 port 58848
May 27 17:39:03.145194 sshd-session[5295]: pam_unix(sshd:session): session closed for user core
May 27 17:39:03.157454 systemd[1]: sshd@19-10.0.0.25:22-10.0.0.1:58848.service: Deactivated successfully.
May 27 17:39:03.161502 systemd[1]: session-20.scope: Deactivated successfully.
May 27 17:39:03.164149 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit.
May 27 17:39:03.169876 systemd[1]: Started sshd@20-10.0.0.25:22-10.0.0.1:36192.service - OpenSSH per-connection server daemon (10.0.0.1:36192).
May 27 17:39:03.172977 systemd-logind[1577]: Removed session 20.
May 27 17:39:03.228531 sshd[5318]: Accepted publickey for core from 10.0.0.1 port 36192 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:39:03.230340 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:03.235364 systemd-logind[1577]: New session 21 of user core.
May 27 17:39:03.246318 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 17:39:03.549842 sshd[5320]: Connection closed by 10.0.0.1 port 36192
May 27 17:39:03.550315 sshd-session[5318]: pam_unix(sshd:session): session closed for user core
May 27 17:39:03.563790 systemd[1]: sshd@20-10.0.0.25:22-10.0.0.1:36192.service: Deactivated successfully.
May 27 17:39:03.566392 systemd[1]: session-21.scope: Deactivated successfully.
May 27 17:39:03.567550 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit.
May 27 17:39:03.571730 systemd[1]: Started sshd@21-10.0.0.25:22-10.0.0.1:36202.service - OpenSSH per-connection server daemon (10.0.0.1:36202).
May 27 17:39:03.572940 systemd-logind[1577]: Removed session 21.
May 27 17:39:03.627234 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 36202 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:39:03.629238 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:03.634897 systemd-logind[1577]: New session 22 of user core.
May 27 17:39:03.644294 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 17:39:03.758257 sshd[5334]: Connection closed by 10.0.0.1 port 36202
May 27 17:39:03.758566 sshd-session[5332]: pam_unix(sshd:session): session closed for user core
May 27 17:39:03.764513 systemd[1]: sshd@21-10.0.0.25:22-10.0.0.1:36202.service: Deactivated successfully.
May 27 17:39:03.766915 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:39:03.767727 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit.
May 27 17:39:03.769418 systemd-logind[1577]: Removed session 22.
May 27 17:39:08.775677 systemd[1]: Started sshd@22-10.0.0.25:22-10.0.0.1:36210.service - OpenSSH per-connection server daemon (10.0.0.1:36210).
May 27 17:39:08.860694 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 36210 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:39:08.862682 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:08.867489 systemd-logind[1577]: New session 23 of user core.
May 27 17:39:08.875295 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 17:39:08.910818 kubelet[2686]: E0527 17:39:08.910741 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-854ff896c6-s24tx" podUID="8474f19b-dee7-4c30-b00b-6e2228a0c605"
May 27 17:39:09.038120 sshd[5349]: Connection closed by 10.0.0.1 port 36210
May 27 17:39:09.038816 sshd-session[5347]: pam_unix(sshd:session): session closed for user core
May 27 17:39:09.045178 systemd[1]: sshd@22-10.0.0.25:22-10.0.0.1:36210.service: Deactivated successfully.
May 27 17:39:09.048036 systemd[1]: session-23.scope: Deactivated successfully.
May 27 17:39:09.049055 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit.
May 27 17:39:09.050978 systemd-logind[1577]: Removed session 23.
May 27 17:39:11.214244 containerd[1593]: time="2025-05-27T17:39:11.214185105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10ebdc86074df58b9c00d3f41f3f3e81ec6e5f09f2abd1bf0eb8c884cf35b597\" id:\"f42e246c5405b95b74369f22ee28ac20f68c7e6f1cc44c257aeded9ee609dd57\" pid:5378 exited_at:{seconds:1748367551 nanos:213653586}"
May 27 17:39:12.909929 kubelet[2686]: E0527 17:39:12.909884 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zdrnc" podUID="d27d1841-eda2-4baf-a093-ef4da86ab409"
May 27 17:39:14.052359 systemd[1]: Started sshd@23-10.0.0.25:22-10.0.0.1:34938.service - OpenSSH per-connection server daemon (10.0.0.1:34938).
May 27 17:39:14.117771 sshd[5392]: Accepted publickey for core from 10.0.0.1 port 34938 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:39:14.119526 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:14.124681 systemd-logind[1577]: New session 24 of user core.
May 27 17:39:14.137292 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 17:39:14.307887 sshd[5394]: Connection closed by 10.0.0.1 port 34938
May 27 17:39:14.308278 sshd-session[5392]: pam_unix(sshd:session): session closed for user core
May 27 17:39:14.313010 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit.
May 27 17:39:14.315999 systemd[1]: sshd@23-10.0.0.25:22-10.0.0.1:34938.service: Deactivated successfully.
May 27 17:39:14.320573 systemd[1]: session-24.scope: Deactivated successfully.
May 27 17:39:14.323650 systemd-logind[1577]: Removed session 24.
May 27 17:39:19.326679 systemd[1]: Started sshd@24-10.0.0.25:22-10.0.0.1:34948.service - OpenSSH per-connection server daemon (10.0.0.1:34948).
May 27 17:39:19.382096 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 34948 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:39:19.383721 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:19.397312 systemd-logind[1577]: New session 25 of user core.
May 27 17:39:19.403303 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 17:39:19.529454 sshd[5410]: Connection closed by 10.0.0.1 port 34948
May 27 17:39:19.529805 sshd-session[5408]: pam_unix(sshd:session): session closed for user core
May 27 17:39:19.534684 systemd[1]: sshd@24-10.0.0.25:22-10.0.0.1:34948.service: Deactivated successfully.
May 27 17:39:19.536833 systemd[1]: session-25.scope: Deactivated successfully.
May 27 17:39:19.537895 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit.
May 27 17:39:19.539589 systemd-logind[1577]: Removed session 25.
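Editor's note: the recurring failures above are all the same fault — containerd cannot fetch an anonymous pull token from ghcr.io (HTTP 403), so every Calico image pull (`whisker`, `whisker-backend`, `goldmane`) ends in ErrImagePull and then ImagePullBackOff. A short script can summarize which image references are affected in an excerpt like this one. This is a sketch, not part of the original tooling: the `failed_pulls` helper and the trimmed `sample` line are illustrative, and the regex assumes containerd's `msg="PullImage \"<ref>\" failed"` quoting style seen in the log.

```python
import re

# A trimmed copy of one containerd error entry from the journal above
# (the "..." stands for the elided rpc error detail).
sample = (
    'May 27 17:38:56.407801 containerd[1593]: '
    'time="2025-05-27T17:38:56.407759972Z" level=error '
    'msg="PullImage \\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\" failed" '
    'error="... 403 Forbidden"'
)

# containerd logs the failing reference as: msg="PullImage \"<ref>\" failed"
PULL_FAIL = re.compile(r'msg="PullImage \\"([^\\]+)\\" failed"')

def failed_pulls(journal_text):
    """Return every image reference whose pull failed in a journal excerpt."""
    return PULL_FAIL.findall(journal_text)

print(failed_pulls(sample))
# → ['ghcr.io/flatcar/calico/whisker-backend:v3.30.0']
```

Run against the full excerpt, the same helper would also surface the `goldmane:v3.30.0` failure logged at 17:39:01.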