May 27 03:15:30.854894 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025 May 27 03:15:30.854916 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:15:30.854927 kernel: BIOS-provided physical RAM map: May 27 03:15:30.854934 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable May 27 03:15:30.854940 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved May 27 03:15:30.854947 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable May 27 03:15:30.854955 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved May 27 03:15:30.854982 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable May 27 03:15:30.854989 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved May 27 03:15:30.854996 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data May 27 03:15:30.855002 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS May 27 03:15:30.855011 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable May 27 03:15:30.855018 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved May 27 03:15:30.855025 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS May 27 03:15:30.855032 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable May 27 03:15:30.855039 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved May 27 03:15:30.855049 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 
27 03:15:30.855066 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 03:15:30.855085 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 27 03:15:30.855097 kernel: NX (Execute Disable) protection: active May 27 03:15:30.855104 kernel: APIC: Static calls initialized May 27 03:15:30.855111 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable May 27 03:15:30.855126 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable May 27 03:15:30.855133 kernel: extended physical RAM map: May 27 03:15:30.855140 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable May 27 03:15:30.855147 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved May 27 03:15:30.855154 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable May 27 03:15:30.855165 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved May 27 03:15:30.855172 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable May 27 03:15:30.855186 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable May 27 03:15:30.855193 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable May 27 03:15:30.855199 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable May 27 03:15:30.855206 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable May 27 03:15:30.855213 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved May 27 03:15:30.855220 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data May 27 03:15:30.855227 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS May 27 03:15:30.855234 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable May 27 03:15:30.855241 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved 
May 27 03:15:30.855251 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS May 27 03:15:30.855258 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable May 27 03:15:30.855268 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved May 27 03:15:30.855276 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 27 03:15:30.855283 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 03:15:30.855290 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 27 03:15:30.855300 kernel: efi: EFI v2.7 by EDK II May 27 03:15:30.855307 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 May 27 03:15:30.855315 kernel: random: crng init done May 27 03:15:30.855331 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 May 27 03:15:30.855339 kernel: secureboot: Secure boot enabled May 27 03:15:30.855346 kernel: SMBIOS 2.8 present. 
May 27 03:15:30.855353 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 May 27 03:15:30.855360 kernel: DMI: Memory slots populated: 1/1 May 27 03:15:30.855368 kernel: Hypervisor detected: KVM May 27 03:15:30.855375 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 27 03:15:30.855382 kernel: kvm-clock: using sched offset of 6159186844 cycles May 27 03:15:30.855393 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 27 03:15:30.855401 kernel: tsc: Detected 2794.748 MHz processor May 27 03:15:30.855408 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 27 03:15:30.855416 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 27 03:15:30.855423 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 May 27 03:15:30.855431 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 27 03:15:30.855438 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 27 03:15:30.855445 kernel: Using GB pages for direct mapping May 27 03:15:30.855453 kernel: ACPI: Early table checksum verification disabled May 27 03:15:30.855463 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) May 27 03:15:30.855470 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 27 03:15:30.855478 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:15:30.855485 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:15:30.855492 kernel: ACPI: FACS 0x000000009BBDD000 000040 May 27 03:15:30.855500 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:15:30.855507 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:15:30.855515 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:15:30.855524 kernel: ACPI: 
WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:15:30.855546 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 27 03:15:30.855564 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] May 27 03:15:30.855579 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] May 27 03:15:30.855586 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] May 27 03:15:30.855594 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] May 27 03:15:30.855601 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] May 27 03:15:30.855608 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] May 27 03:15:30.855616 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] May 27 03:15:30.855623 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] May 27 03:15:30.855634 kernel: No NUMA configuration found May 27 03:15:30.855641 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] May 27 03:15:30.855649 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] May 27 03:15:30.855656 kernel: Zone ranges: May 27 03:15:30.855664 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 27 03:15:30.855671 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] May 27 03:15:30.855678 kernel: Normal empty May 27 03:15:30.855685 kernel: Device empty May 27 03:15:30.855693 kernel: Movable zone start for each node May 27 03:15:30.855703 kernel: Early memory node ranges May 27 03:15:30.855710 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] May 27 03:15:30.855718 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] May 27 03:15:30.855725 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] May 27 03:15:30.855732 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] May 27 03:15:30.855739 kernel: node 0: [mem 
0x000000009bfb7000-0x000000009bffffff] May 27 03:15:30.855747 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] May 27 03:15:30.855754 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 27 03:15:30.855761 kernel: On node 0, zone DMA: 32 pages in unavailable ranges May 27 03:15:30.855778 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 27 03:15:30.855790 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 27 03:15:30.855809 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges May 27 03:15:30.855817 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges May 27 03:15:30.855832 kernel: ACPI: PM-Timer IO Port: 0x608 May 27 03:15:30.855839 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 27 03:15:30.855847 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 27 03:15:30.855864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 27 03:15:30.855876 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 27 03:15:30.855898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 27 03:15:30.855906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 27 03:15:30.855913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 27 03:15:30.855920 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 27 03:15:30.855928 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 27 03:15:30.855935 kernel: TSC deadline timer available May 27 03:15:30.855942 kernel: CPU topo: Max. logical packages: 1 May 27 03:15:30.855950 kernel: CPU topo: Max. logical dies: 1 May 27 03:15:30.855957 kernel: CPU topo: Max. dies per package: 1 May 27 03:15:30.856076 kernel: CPU topo: Max. threads per core: 1 May 27 03:15:30.856092 kernel: CPU topo: Num. cores per package: 4 May 27 03:15:30.856107 kernel: CPU topo: Num. 
threads per package: 4 May 27 03:15:30.856118 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs May 27 03:15:30.856126 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 27 03:15:30.856134 kernel: kvm-guest: KVM setup pv remote TLB flush May 27 03:15:30.856141 kernel: kvm-guest: setup PV sched yield May 27 03:15:30.856149 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices May 27 03:15:30.856159 kernel: Booting paravirtualized kernel on KVM May 27 03:15:30.856167 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 27 03:15:30.856176 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 27 03:15:30.856185 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 May 27 03:15:30.856194 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 May 27 03:15:30.856203 kernel: pcpu-alloc: [0] 0 1 2 3 May 27 03:15:30.856211 kernel: kvm-guest: PV spinlocks enabled May 27 03:15:30.856218 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 27 03:15:30.856227 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:15:30.856238 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 03:15:30.856246 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 27 03:15:30.856253 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 03:15:30.856261 kernel: Fallback order for Node 0: 0 May 27 03:15:30.856269 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 638054 May 27 03:15:30.856276 kernel: Policy zone: DMA32 May 27 03:15:30.856284 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 03:15:30.856292 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 27 03:15:30.856302 kernel: ftrace: allocating 40081 entries in 157 pages May 27 03:15:30.856309 kernel: ftrace: allocated 157 pages with 5 groups May 27 03:15:30.856317 kernel: Dynamic Preempt: voluntary May 27 03:15:30.856324 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 03:15:30.856333 kernel: rcu: RCU event tracing is enabled. May 27 03:15:30.856341 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 27 03:15:30.856349 kernel: Trampoline variant of Tasks RCU enabled. May 27 03:15:30.856356 kernel: Rude variant of Tasks RCU enabled. May 27 03:15:30.856364 kernel: Tracing variant of Tasks RCU enabled. May 27 03:15:30.856374 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 03:15:30.856382 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 27 03:15:30.856390 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 03:15:30.856398 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 03:15:30.856406 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 03:15:30.856414 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 27 03:15:30.856421 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 27 03:15:30.856429 kernel: Console: colour dummy device 80x25 May 27 03:15:30.856437 kernel: printk: legacy console [ttyS0] enabled May 27 03:15:30.856447 kernel: ACPI: Core revision 20240827 May 27 03:15:30.856454 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 27 03:15:30.856462 kernel: APIC: Switch to symmetric I/O mode setup May 27 03:15:30.856470 kernel: x2apic enabled May 27 03:15:30.856477 kernel: APIC: Switched APIC routing to: physical x2apic May 27 03:15:30.856485 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 27 03:15:30.856493 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 27 03:15:30.856501 kernel: kvm-guest: setup PV IPIs May 27 03:15:30.856508 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 27 03:15:30.856519 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 27 03:15:30.856531 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 27 03:15:30.856546 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 27 03:15:30.856554 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 27 03:15:30.856561 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 27 03:15:30.856569 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 27 03:15:30.856577 kernel: Spectre V2 : Mitigation: Retpolines May 27 03:15:30.856588 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 27 03:15:30.856596 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 27 03:15:30.856608 kernel: RETBleed: Mitigation: untrained return thunk May 27 03:15:30.856618 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 27 03:15:30.856626 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 27 03:15:30.856634 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 27 03:15:30.856642 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 27 03:15:30.856650 kernel: x86/bugs: return thunk changed May 27 03:15:30.856658 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 27 03:15:30.856665 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 27 03:15:30.856676 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 27 03:15:30.856684 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 27 03:15:30.856691 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 27 03:15:30.856699 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
May 27 03:15:30.856707 kernel: Freeing SMP alternatives memory: 32K May 27 03:15:30.856715 kernel: pid_max: default: 32768 minimum: 301 May 27 03:15:30.856722 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 03:15:30.856730 kernel: landlock: Up and running. May 27 03:15:30.856738 kernel: SELinux: Initializing. May 27 03:15:30.856748 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 03:15:30.856756 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 03:15:30.856764 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 27 03:15:30.856771 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 27 03:15:30.856779 kernel: ... version: 0 May 27 03:15:30.856787 kernel: ... bit width: 48 May 27 03:15:30.856794 kernel: ... generic registers: 6 May 27 03:15:30.856802 kernel: ... value mask: 0000ffffffffffff May 27 03:15:30.856810 kernel: ... max period: 00007fffffffffff May 27 03:15:30.856827 kernel: ... fixed-purpose events: 0 May 27 03:15:30.856835 kernel: ... event mask: 000000000000003f May 27 03:15:30.856843 kernel: signal: max sigframe size: 1776 May 27 03:15:30.856850 kernel: rcu: Hierarchical SRCU implementation. May 27 03:15:30.856858 kernel: rcu: Max phase no-delay instances is 400. May 27 03:15:30.856866 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 03:15:30.856874 kernel: smp: Bringing up secondary CPUs ... May 27 03:15:30.856881 kernel: smpboot: x86: Booting SMP configuration: May 27 03:15:30.856889 kernel: .... 
node #0, CPUs: #1 #2 #3 May 27 03:15:30.856897 kernel: smp: Brought up 1 node, 4 CPUs May 27 03:15:30.856907 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 27 03:15:30.856915 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 137064K reserved, 0K cma-reserved) May 27 03:15:30.856922 kernel: devtmpfs: initialized May 27 03:15:30.856930 kernel: x86/mm: Memory block size: 128MB May 27 03:15:30.856938 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) May 27 03:15:30.856946 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) May 27 03:15:30.856954 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 03:15:30.856974 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 27 03:15:30.856984 kernel: pinctrl core: initialized pinctrl subsystem May 27 03:15:30.856992 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 03:15:30.856999 kernel: audit: initializing netlink subsys (disabled) May 27 03:15:30.857007 kernel: audit: type=2000 audit(1748315728.299:1): state=initialized audit_enabled=0 res=1 May 27 03:15:30.857015 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 03:15:30.857023 kernel: thermal_sys: Registered thermal governor 'user_space' May 27 03:15:30.857031 kernel: cpuidle: using governor menu May 27 03:15:30.857038 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 03:15:30.857046 kernel: dca service started, version 1.12.1 May 27 03:15:30.857056 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] May 27 03:15:30.857064 kernel: PCI: Using configuration type 1 for base access May 27 03:15:30.857072 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 27 03:15:30.857079 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 27 03:15:30.857087 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 27 03:15:30.857095 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 03:15:30.857103 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 27 03:15:30.857110 kernel: ACPI: Added _OSI(Module Device) May 27 03:15:30.857118 kernel: ACPI: Added _OSI(Processor Device) May 27 03:15:30.857127 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 03:15:30.857135 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 03:15:30.857143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 03:15:30.857150 kernel: ACPI: Interpreter enabled May 27 03:15:30.857158 kernel: ACPI: PM: (supports S0 S5) May 27 03:15:30.857166 kernel: ACPI: Using IOAPIC for interrupt routing May 27 03:15:30.857173 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 27 03:15:30.857181 kernel: PCI: Using E820 reservations for host bridge windows May 27 03:15:30.857189 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 27 03:15:30.857199 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 03:15:30.857467 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 27 03:15:30.857652 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 27 03:15:30.857798 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 27 03:15:30.857810 kernel: PCI host bridge to bus 0000:00 May 27 03:15:30.857953 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 27 03:15:30.858101 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 27 03:15:30.858272 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 27 03:15:30.858449 kernel: pci_bus 0000:00: root bus resource 
[mem 0x9d000000-0xdfffffff window] May 27 03:15:30.858593 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] May 27 03:15:30.858715 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] May 27 03:15:30.858844 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 27 03:15:30.859123 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 27 03:15:30.859285 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint May 27 03:15:30.859412 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] May 27 03:15:30.859593 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] May 27 03:15:30.859730 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] May 27 03:15:30.859863 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 27 03:15:30.860044 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 27 03:15:30.860210 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] May 27 03:15:30.860388 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] May 27 03:15:30.860572 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] May 27 03:15:30.860738 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 27 03:15:30.860872 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] May 27 03:15:30.861024 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] May 27 03:15:30.861159 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] May 27 03:15:30.861338 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 27 03:15:30.861546 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] May 27 03:15:30.861760 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] May 27 03:15:30.861892 kernel: pci 0000:00:04.0: BAR 4 [mem 
0x380000008000-0x38000000bfff 64bit pref] May 27 03:15:30.862071 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] May 27 03:15:30.862261 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 27 03:15:30.863126 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 27 03:15:30.863342 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 27 03:15:30.863493 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] May 27 03:15:30.863723 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] May 27 03:15:30.864097 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 27 03:15:30.864323 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] May 27 03:15:30.864344 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 27 03:15:30.864353 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 27 03:15:30.864365 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 27 03:15:30.864391 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 27 03:15:30.864400 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 27 03:15:30.864418 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 27 03:15:30.864436 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 27 03:15:30.864450 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 27 03:15:30.864459 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 27 03:15:30.864467 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 27 03:15:30.864481 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 27 03:15:30.864489 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 27 03:15:30.864501 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 27 03:15:30.864509 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 27 03:15:30.864518 
kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 27 03:15:30.864526 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 27 03:15:30.864534 kernel: iommu: Default domain type: Translated May 27 03:15:30.864542 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 27 03:15:30.864550 kernel: efivars: Registered efivars operations May 27 03:15:30.864558 kernel: PCI: Using ACPI for IRQ routing May 27 03:15:30.864566 kernel: PCI: pci_cache_line_size set to 64 bytes May 27 03:15:30.864578 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] May 27 03:15:30.864585 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] May 27 03:15:30.864593 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] May 27 03:15:30.864601 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] May 27 03:15:30.864609 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] May 27 03:15:30.864740 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 27 03:15:30.864891 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 27 03:15:30.865223 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 27 03:15:30.865251 kernel: vgaarb: loaded May 27 03:15:30.865259 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 27 03:15:30.865278 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 27 03:15:30.865295 kernel: clocksource: Switched to clocksource kvm-clock May 27 03:15:30.865316 kernel: VFS: Disk quotas dquot_6.6.0 May 27 03:15:30.865339 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 03:15:30.865350 kernel: pnp: PnP ACPI init May 27 03:15:30.865658 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved May 27 03:15:30.865674 kernel: pnp: PnP ACPI: found 6 devices May 27 03:15:30.865686 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 27 03:15:30.865695 kernel: NET: 
Registered PF_INET protocol family
May 27 03:15:30.865703 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:15:30.865711 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:15:30.865719 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:15:30.865728 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:15:30.865736 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:15:30.865744 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:15:30.865755 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:15:30.865763 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:15:30.865771 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:15:30.865779 kernel: NET: Registered PF_XDP protocol family
May 27 03:15:30.865945 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 27 03:15:30.866118 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 27 03:15:30.866241 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:15:30.866351 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:15:30.866467 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:15:30.866588 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 27 03:15:30.866698 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 27 03:15:30.866805 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 27 03:15:30.866816 kernel: PCI: CLS 0 bytes, default 64
May 27 03:15:30.866834 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:15:30.866842 kernel: Initialise system trusted keyrings
May 27 03:15:30.866850 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:15:30.866859 kernel: Key type asymmetric registered
May 27 03:15:30.866872 kernel: Asymmetric key parser 'x509' registered
May 27 03:15:30.866896 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:15:30.866907 kernel: io scheduler mq-deadline registered
May 27 03:15:30.866915 kernel: io scheduler kyber registered
May 27 03:15:30.866923 kernel: io scheduler bfq registered
May 27 03:15:30.866931 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:15:30.866940 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:15:30.866948 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:15:30.866957 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 03:15:30.867056 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:15:30.867064 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:15:30.867073 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:15:30.867081 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:15:30.867089 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:15:30.867231 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 03:15:30.867244 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 03:15:30.867360 kernel: rtc_cmos 00:04: registered as rtc0
May 27 03:15:30.867476 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T03:15:30 UTC (1748315730)
May 27 03:15:30.867587 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 27 03:15:30.867598 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 03:15:30.867615 kernel: efifb: probing for efifb
May 27 03:15:30.867623 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 27 03:15:30.867632 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 27 03:15:30.867640 kernel: efifb: scrolling: redraw
May 27 03:15:30.867658 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 03:15:30.867675 kernel: Console: switching to colour frame buffer device 160x50
May 27 03:15:30.867705 kernel: fb0: EFI VGA frame buffer device
May 27 03:15:30.867726 kernel: pstore: Using crash dump compression: deflate
May 27 03:15:30.867740 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 03:15:30.867748 kernel: NET: Registered PF_INET6 protocol family
May 27 03:15:30.867764 kernel: Segment Routing with IPv6
May 27 03:15:30.867784 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:15:30.867797 kernel: NET: Registered PF_PACKET protocol family
May 27 03:15:30.867805 kernel: Key type dns_resolver registered
May 27 03:15:30.867814 kernel: IPI shorthand broadcast: enabled
May 27 03:15:30.867832 kernel: sched_clock: Marking stable (3725002097, 143849013)->(3961503649, -92652539)
May 27 03:15:30.867840 kernel: registered taskstats version 1
May 27 03:15:30.867856 kernel: Loading compiled-in X.509 certificates
May 27 03:15:30.867870 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:15:30.867886 kernel: Demotion targets for Node 0: null
May 27 03:15:30.867902 kernel: Key type .fscrypt registered
May 27 03:15:30.867925 kernel: Key type fscrypt-provisioning registered
May 27 03:15:30.867935 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 03:15:30.867943 kernel: ima: Allocated hash algorithm: sha1
May 27 03:15:30.867957 kernel: ima: No architecture policies found
May 27 03:15:30.867981 kernel: clk: Disabling unused clocks
May 27 03:15:30.867990 kernel: Warning: unable to open an initial console.
May 27 03:15:30.867999 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:15:30.868007 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:15:30.868019 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:15:30.868027 kernel: Run /init as init process
May 27 03:15:30.868036 kernel: with arguments:
May 27 03:15:30.868044 kernel: /init
May 27 03:15:30.868052 kernel: with environment:
May 27 03:15:30.868060 kernel: HOME=/
May 27 03:15:30.868068 kernel: TERM=linux
May 27 03:15:30.868077 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:15:30.868086 systemd[1]: Successfully made /usr/ read-only.
May 27 03:15:30.868137 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:15:30.868147 systemd[1]: Detected virtualization kvm.
May 27 03:15:30.868156 systemd[1]: Detected architecture x86-64.
May 27 03:15:30.868164 systemd[1]: Running in initrd.
May 27 03:15:30.868173 systemd[1]: No hostname configured, using default hostname.
May 27 03:15:30.868182 systemd[1]: Hostname set to .
May 27 03:15:30.868191 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:15:30.868204 systemd[1]: Queued start job for default target initrd.target.
May 27 03:15:30.868215 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:15:30.868224 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:15:30.868234 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:15:30.868243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:15:30.868251 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:15:30.868261 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:15:30.868274 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:15:30.868283 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:15:30.868291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:15:30.868307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:15:30.868326 systemd[1]: Reached target paths.target - Path Units.
May 27 03:15:30.868339 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:15:30.868355 systemd[1]: Reached target swap.target - Swaps.
May 27 03:15:30.868364 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:15:30.868376 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:15:30.868384 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:15:30.868393 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 03:15:30.868402 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 03:15:30.868411 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:15:30.868419 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:15:30.868428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:15:30.868437 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:15:30.868446 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 03:15:30.868457 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:15:30.868465 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 03:15:30.868475 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 03:15:30.868484 systemd[1]: Starting systemd-fsck-usr.service...
May 27 03:15:30.868493 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:15:30.868503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:15:30.868512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:15:30.868521 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 03:15:30.868533 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:15:30.868541 systemd[1]: Finished systemd-fsck-usr.service.
May 27 03:15:30.868550 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:15:30.868586 systemd-journald[220]: Collecting audit messages is disabled.
May 27 03:15:30.868616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:15:30.868626 systemd-journald[220]: Journal started
May 27 03:15:30.868647 systemd-journald[220]: Runtime Journal (/run/log/journal/33e018d7c2734c5191ca446adcbcc95e) is 6M, max 48.2M, 42.2M free.
May 27 03:15:30.857736 systemd-modules-load[221]: Inserted module 'overlay'
May 27 03:15:30.870988 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:15:30.871905 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:15:30.878344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 03:15:30.882112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:15:30.888406 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 03:15:30.890924 systemd-modules-load[221]: Inserted module 'br_netfilter'
May 27 03:15:30.891848 kernel: Bridge firewalling registered
May 27 03:15:30.893624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:15:30.894840 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:15:30.899605 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:15:30.909586 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:15:30.914341 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:15:30.915485 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 03:15:30.917126 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 03:15:30.919252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:15:30.922770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:15:30.933946 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:15:30.947407 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:15:30.980730 systemd-resolved[262]: Positive Trust Anchors:
May 27 03:15:30.980747 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:15:30.980777 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:15:30.983619 systemd-resolved[262]: Defaulting to hostname 'linux'.
May 27 03:15:30.985164 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:15:30.991023 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:15:31.057006 kernel: SCSI subsystem initialized
May 27 03:15:31.065992 kernel: Loading iSCSI transport class v2.0-870.
May 27 03:15:31.076992 kernel: iscsi: registered transport (tcp)
May 27 03:15:31.098993 kernel: iscsi: registered transport (qla4xxx)
May 27 03:15:31.099021 kernel: QLogic iSCSI HBA Driver
May 27 03:15:31.121144 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:15:31.142661 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:15:31.146292 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:15:31.208380 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 03:15:31.210332 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 03:15:31.279009 kernel: raid6: avx2x4 gen() 30260 MB/s
May 27 03:15:31.295990 kernel: raid6: avx2x2 gen() 30127 MB/s
May 27 03:15:31.313097 kernel: raid6: avx2x1 gen() 24131 MB/s
May 27 03:15:31.313151 kernel: raid6: using algorithm avx2x4 gen() 30260 MB/s
May 27 03:15:31.331109 kernel: raid6: .... xor() 7056 MB/s, rmw enabled
May 27 03:15:31.331179 kernel: raid6: using avx2x2 recovery algorithm
May 27 03:15:31.352011 kernel: xor: automatically using best checksumming function avx
May 27 03:15:31.580019 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 03:15:31.589722 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:15:31.593149 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:15:31.622897 systemd-udevd[471]: Using default interface naming scheme 'v255'.
May 27 03:15:31.628463 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:15:31.632023 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 03:15:31.667010 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
May 27 03:15:31.697190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:15:31.701045 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:15:31.787540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:15:31.790499 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 03:15:31.830991 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 27 03:15:31.834427 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 27 03:15:31.842028 kernel: cryptd: max_cpu_qlen set to 1000
May 27 03:15:31.863996 kernel: libata version 3.00 loaded.
May 27 03:15:31.869180 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 03:15:31.874013 kernel: AES CTR mode by8 optimization enabled
May 27 03:15:31.877915 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 03:15:31.877955 kernel: GPT:9289727 != 19775487
May 27 03:15:31.878007 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 03:15:31.878034 kernel: GPT:9289727 != 19775487
May 27 03:15:31.878052 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 03:15:31.878074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:15:31.885255 kernel: ahci 0000:00:1f.2: version 3.0
May 27 03:15:31.885485 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 27 03:15:31.886323 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 27 03:15:31.896645 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 27 03:15:31.896849 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 27 03:15:31.896109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:15:31.896241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:15:31.906129 kernel: scsi host0: ahci
May 27 03:15:31.906364 kernel: scsi host1: ahci
May 27 03:15:31.899490 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:15:31.911357 kernel: scsi host2: ahci
May 27 03:15:31.907246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:15:31.913950 kernel: scsi host3: ahci
May 27 03:15:31.910675 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:15:31.927984 kernel: scsi host4: ahci
May 27 03:15:31.934929 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 03:15:31.943999 kernel: scsi host5: ahci
May 27 03:15:31.944229 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
May 27 03:15:31.944243 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
May 27 03:15:31.944253 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
May 27 03:15:31.944264 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
May 27 03:15:31.944274 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
May 27 03:15:31.944285 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
May 27 03:15:31.943237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:15:31.943365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:15:31.973618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 03:15:31.982525 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 03:15:31.989539 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 03:15:31.989960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 03:15:31.995278 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 03:15:31.998765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:15:32.027137 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:15:32.041451 disk-uuid[629]: Primary Header is updated.
May 27 03:15:32.041451 disk-uuid[629]: Secondary Entries is updated.
May 27 03:15:32.041451 disk-uuid[629]: Secondary Header is updated.
May 27 03:15:32.045992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:15:32.049987 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:15:32.252986 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 27 03:15:32.253061 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 27 03:15:32.253073 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 27 03:15:32.253083 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 27 03:15:32.254018 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 27 03:15:32.254108 kernel: ata3.00: applying bridge limits
May 27 03:15:32.255003 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 27 03:15:32.256003 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 27 03:15:32.257003 kernel: ata3.00: configured for UDMA/100
May 27 03:15:32.258990 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 27 03:15:32.310036 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 27 03:15:32.310395 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 03:15:32.331023 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 27 03:15:32.650426 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 03:15:32.651345 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:15:32.651826 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:15:32.652158 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:15:32.653400 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 03:15:32.677285 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:15:33.050078 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:15:33.051810 disk-uuid[634]: The operation has completed successfully.
May 27 03:15:33.113526 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 03:15:33.113647 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 03:15:33.115663 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 03:15:33.133335 sh[663]: Success
May 27 03:15:33.151690 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 03:15:33.151768 kernel: device-mapper: uevent: version 1.0.3
May 27 03:15:33.151790 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 03:15:33.161007 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 03:15:33.191951 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 03:15:33.195825 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 03:15:33.212196 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 03:15:33.219412 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 03:15:33.219440 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (675)
May 27 03:15:33.220881 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522
May 27 03:15:33.221817 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 03:15:33.221833 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 03:15:33.226945 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 03:15:33.229083 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:15:33.231323 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 03:15:33.233909 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 03:15:33.236711 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 03:15:33.259026 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (707)
May 27 03:15:33.261324 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:15:33.261354 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:15:33.261366 kernel: BTRFS info (device vda6): using free-space-tree
May 27 03:15:33.269007 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:15:33.269890 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 03:15:33.272128 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 03:15:33.602821 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:15:33.608434 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:15:33.620231 ignition[751]: Ignition 2.21.0
May 27 03:15:33.620250 ignition[751]: Stage: fetch-offline
May 27 03:15:33.620330 ignition[751]: no configs at "/usr/lib/ignition/base.d"
May 27 03:15:33.620344 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:15:33.620532 ignition[751]: parsed url from cmdline: ""
May 27 03:15:33.620537 ignition[751]: no config URL provided
May 27 03:15:33.620545 ignition[751]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:15:33.620557 ignition[751]: no config at "/usr/lib/ignition/user.ign"
May 27 03:15:33.620594 ignition[751]: op(1): [started] loading QEMU firmware config module
May 27 03:15:33.620602 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 27 03:15:33.632757 ignition[751]: op(1): [finished] loading QEMU firmware config module
May 27 03:15:33.657127 systemd-networkd[852]: lo: Link UP
May 27 03:15:33.657147 systemd-networkd[852]: lo: Gained carrier
May 27 03:15:33.660137 systemd-networkd[852]: Enumeration completed
May 27 03:15:33.660312 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:15:33.660795 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:15:33.660801 systemd-networkd[852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:15:33.662769 systemd[1]: Reached target network.target - Network.
May 27 03:15:33.663039 systemd-networkd[852]: eth0: Link UP
May 27 03:15:33.663050 systemd-networkd[852]: eth0: Gained carrier
May 27 03:15:33.663069 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:15:33.693340 ignition[751]: parsing config with SHA512: 54df3846fbd746e94404132c73545c085416b88c14ec3c48881bc9ec1a3c6ad85e3c69b3b4b1aa858d507a341cadaa5e8b431782e01f01cb97be320c73a5d10b
May 27 03:15:33.699197 unknown[751]: fetched base config from "system"
May 27 03:15:33.699208 unknown[751]: fetched user config from "qemu"
May 27 03:15:33.699767 ignition[751]: fetch-offline: fetch-offline passed
May 27 03:15:33.699854 ignition[751]: Ignition finished successfully
May 27 03:15:33.702825 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:15:33.703020 systemd-networkd[852]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 03:15:33.705153 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 27 03:15:33.706076 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 03:15:33.787613 ignition[859]: Ignition 2.21.0
May 27 03:15:33.787627 ignition[859]: Stage: kargs
May 27 03:15:33.787751 ignition[859]: no configs at "/usr/lib/ignition/base.d"
May 27 03:15:33.787762 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:15:33.789795 ignition[859]: kargs: kargs passed
May 27 03:15:33.790085 ignition[859]: Ignition finished successfully
May 27 03:15:33.795351 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 03:15:33.798401 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 03:15:33.840403 ignition[867]: Ignition 2.21.0
May 27 03:15:33.840416 ignition[867]: Stage: disks
May 27 03:15:33.840558 ignition[867]: no configs at "/usr/lib/ignition/base.d"
May 27 03:15:33.840569 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:15:33.841461 ignition[867]: disks: disks passed
May 27 03:15:33.844310 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 03:15:33.841502 ignition[867]: Ignition finished successfully
May 27 03:15:33.845754 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 03:15:33.847604 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 03:15:33.849605 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:15:33.850230 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:15:33.850548 systemd[1]: Reached target basic.target - Basic System.
May 27 03:15:33.851933 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 03:15:33.888916 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 03:15:33.896539 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 03:15:33.897818 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 03:15:34.084990 kernel: EXT4-fs (vda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none.
May 27 03:15:34.085382 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 03:15:34.086260 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 03:15:34.088611 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:15:34.091101 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 03:15:34.091759 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 03:15:34.091806 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 03:15:34.091828 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:15:34.116099 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (886)
May 27 03:15:34.116157 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:15:34.117193 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 03:15:34.121001 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:15:34.121025 kernel: BTRFS info (device vda6): using free-space-tree
May 27 03:15:34.122221 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 03:15:34.123713 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:15:34.175701 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
May 27 03:15:34.181802 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
May 27 03:15:34.186282 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
May 27 03:15:34.190215 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 03:15:34.361396 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 03:15:34.364317 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 03:15:34.365574 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 03:15:34.387098 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 03:15:34.388720 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:15:34.401579 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 03:15:34.496538 ignition[1001]: INFO : Ignition 2.21.0
May 27 03:15:34.496538 ignition[1001]: INFO : Stage: mount
May 27 03:15:34.498611 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:15:34.498611 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:15:34.500909 ignition[1001]: INFO : mount: mount passed
May 27 03:15:34.500909 ignition[1001]: INFO : Ignition finished successfully
May 27 03:15:34.502488 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 03:15:34.504812 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 03:15:34.532914 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:15:34.560480 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1013)
May 27 03:15:34.560527 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:15:34.560542 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:15:34.561357 kernel: BTRFS info (device vda6): using free-space-tree
May 27 03:15:34.565466 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:15:34.601918 ignition[1030]: INFO : Ignition 2.21.0
May 27 03:15:34.601918 ignition[1030]: INFO : Stage: files
May 27 03:15:34.604143 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:15:34.604143 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:15:34.606855 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
May 27 03:15:34.606855 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 03:15:34.606855 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 03:15:34.611421 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 03:15:34.611421 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 03:15:34.611421 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 03:15:34.611421 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 27 03:15:34.611421 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 27 03:15:34.609537 unknown[1030]: wrote ssh authorized keys file for user: core
May 27 03:15:34.693116 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 03:15:34.785381 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 27 03:15:34.785381 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 03:15:34.790083 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 03:15:34.790083 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 03:15:34.790083 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 03:15:34.790083 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:15:34.790083 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:15:34.790083 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:15:34.790083 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:15:34.811428 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:15:34.814097 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:15:34.814097 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:15:34.819117 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:15:34.819117 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:15:34.819117 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 27 03:15:35.385471 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 03:15:35.612237 systemd-networkd[852]: eth0: Gained IPv6LL May 27 03:15:35.980623 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:15:35.980623 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 03:15:35.984550 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:15:35.991000 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:15:35.991000 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 03:15:35.991000 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 27 03:15:35.995359 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:15:35.997273 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:15:35.997273 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 27 03:15:35.997273 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 27 03:15:36.020857 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:15:36.026575 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 
03:15:36.028377 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 27 03:15:36.028377 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 27 03:15:36.031629 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 27 03:15:36.031629 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 03:15:36.031629 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 03:15:36.031629 ignition[1030]: INFO : files: files passed May 27 03:15:36.031629 ignition[1030]: INFO : Ignition finished successfully May 27 03:15:36.032060 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 03:15:36.033830 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 03:15:36.037276 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 03:15:36.051299 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 03:15:36.051706 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 03:15:36.054444 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory May 27 03:15:36.058700 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:15:36.060775 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:15:36.063627 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 03:15:36.062048 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 27 03:15:36.063794 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 03:15:36.066363 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 03:15:36.107719 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 03:15:36.107867 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 03:15:36.108628 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 03:15:36.111375 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 03:15:36.111757 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 03:15:36.116378 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 03:15:36.148076 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:15:36.150051 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 03:15:36.171219 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 03:15:36.173709 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:15:36.174337 systemd[1]: Stopped target timers.target - Timer Units. May 27 03:15:36.174656 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 03:15:36.174819 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:15:36.178802 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 03:15:36.179337 systemd[1]: Stopped target basic.target - Basic System. May 27 03:15:36.179693 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 03:15:36.180037 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:15:36.180563 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
May 27 03:15:36.180907 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 03:15:36.181462 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 03:15:36.181772 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:15:36.182284 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 03:15:36.182674 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 03:15:36.182988 systemd[1]: Stopped target swap.target - Swaps. May 27 03:15:36.183494 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 03:15:36.183652 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 03:15:36.206236 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 03:15:36.207465 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:15:36.207751 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 03:15:36.207924 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:15:36.212180 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 03:15:36.212376 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 03:15:36.216583 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 03:15:36.216748 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:15:36.217671 systemd[1]: Stopped target paths.target - Path Units. May 27 03:15:36.221088 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 03:15:36.221342 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:15:36.222549 systemd[1]: Stopped target slices.target - Slice Units. May 27 03:15:36.225908 systemd[1]: Stopped target sockets.target - Socket Units. 
May 27 03:15:36.228521 systemd[1]: iscsid.socket: Deactivated successfully. May 27 03:15:36.228672 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:15:36.230893 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 03:15:36.231024 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:15:36.233096 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 03:15:36.233343 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:15:36.235101 systemd[1]: ignition-files.service: Deactivated successfully. May 27 03:15:36.235220 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 03:15:36.241717 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 03:15:36.242459 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 03:15:36.242661 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:15:36.246129 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 03:15:36.248581 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 03:15:36.248707 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:15:36.250384 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 03:15:36.250494 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:15:36.260388 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 03:15:36.264147 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 27 03:15:36.286770 ignition[1085]: INFO : Ignition 2.21.0 May 27 03:15:36.286770 ignition[1085]: INFO : Stage: umount May 27 03:15:36.286770 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:15:36.286770 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:15:36.291364 ignition[1085]: INFO : umount: umount passed May 27 03:15:36.291364 ignition[1085]: INFO : Ignition finished successfully May 27 03:15:36.289524 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 03:15:36.293809 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 03:15:36.293939 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 03:15:36.295053 systemd[1]: Stopped target network.target - Network. May 27 03:15:36.296542 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 03:15:36.296636 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 03:15:36.298325 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 03:15:36.298386 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 03:15:36.300215 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 03:15:36.300270 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 03:15:36.300523 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 03:15:36.300565 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 03:15:36.300999 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 03:15:36.301408 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 03:15:36.310921 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 03:15:36.311078 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 03:15:36.316324 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
May 27 03:15:36.317717 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 03:15:36.317796 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:15:36.323052 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 03:15:36.330393 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 03:15:36.330548 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 03:15:36.334309 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 03:15:36.334481 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 03:15:36.334798 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 03:15:36.334836 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 03:15:36.340592 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 03:15:36.341229 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 03:15:36.341281 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:15:36.341604 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 03:15:36.341652 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:15:36.346691 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 03:15:36.346753 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 03:15:36.347276 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:15:36.350652 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:15:36.367277 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 03:15:36.367466 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 27 03:15:36.371426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 03:15:36.371504 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 03:15:36.372314 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 03:15:36.372349 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:15:36.372613 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 03:15:36.372667 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 03:15:36.373435 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 03:15:36.373484 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 03:15:36.374280 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 03:15:36.374341 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:15:36.375811 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 03:15:36.385535 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 03:15:36.385595 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:15:36.391598 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 03:15:36.391656 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:15:36.395222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:15:36.395281 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:15:36.398296 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 03:15:36.398419 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 03:15:36.399204 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 27 03:15:36.399311 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 03:15:36.480250 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 03:15:36.480428 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 03:15:36.481284 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 03:15:36.483444 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 03:15:36.483501 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 03:15:36.488038 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 03:15:36.509792 systemd[1]: Switching root. May 27 03:15:36.542665 systemd-journald[220]: Journal stopped May 27 03:15:37.816821 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). May 27 03:15:37.816889 kernel: SELinux: policy capability network_peer_controls=1 May 27 03:15:37.816904 kernel: SELinux: policy capability open_perms=1 May 27 03:15:37.816922 kernel: SELinux: policy capability extended_socket_class=1 May 27 03:15:37.816934 kernel: SELinux: policy capability always_check_network=0 May 27 03:15:37.816945 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 03:15:37.816984 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 03:15:37.817001 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 03:15:37.817012 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 03:15:37.817028 kernel: SELinux: policy capability userspace_initial_context=0 May 27 03:15:37.817040 kernel: audit: type=1403 audit(1748315736.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 03:15:37.817057 systemd[1]: Successfully loaded SELinux policy in 43.026ms. May 27 03:15:37.817073 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.093ms. 
May 27 03:15:37.817092 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:15:37.817104 systemd[1]: Detected virtualization kvm. May 27 03:15:37.817116 systemd[1]: Detected architecture x86-64. May 27 03:15:37.817128 systemd[1]: Detected first boot. May 27 03:15:37.817140 systemd[1]: Initializing machine ID from VM UUID. May 27 03:15:37.817152 zram_generator::config[1131]: No configuration found. May 27 03:15:37.817165 kernel: Guest personality initialized and is inactive May 27 03:15:37.817185 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 03:15:37.817201 kernel: Initialized host personality May 27 03:15:37.817213 kernel: NET: Registered PF_VSOCK protocol family May 27 03:15:37.817225 systemd[1]: Populated /etc with preset unit settings. May 27 03:15:37.817238 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 03:15:37.817250 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 03:15:37.817262 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 03:15:37.817274 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 03:15:37.817292 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 03:15:37.817305 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 03:15:37.817321 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 03:15:37.817333 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 03:15:37.817345 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
May 27 03:15:37.817357 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 03:15:37.817370 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 03:15:37.817382 systemd[1]: Created slice user.slice - User and Session Slice. May 27 03:15:37.817393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:15:37.817410 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:15:37.817422 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 03:15:37.817434 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 03:15:37.817447 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 03:15:37.817465 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:15:37.817477 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 03:15:37.817489 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:15:37.817501 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:15:37.817518 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 03:15:37.817530 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 03:15:37.817543 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 03:15:37.817555 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 03:15:37.817566 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:15:37.817578 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
May 27 03:15:37.817595 systemd[1]: Reached target slices.target - Slice Units. May 27 03:15:37.817607 systemd[1]: Reached target swap.target - Swaps. May 27 03:15:37.817619 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 03:15:37.817636 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 03:15:37.817648 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 03:15:37.817660 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:15:37.817672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:15:37.817684 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:15:37.817695 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 03:15:37.817716 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 03:15:37.817729 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 03:15:37.817741 systemd[1]: Mounting media.mount - External Media Directory... May 27 03:15:37.817758 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:15:37.817770 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 03:15:37.817782 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 03:15:37.817794 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 03:15:37.817806 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 03:15:37.817819 systemd[1]: Reached target machines.target - Containers. May 27 03:15:37.817830 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 27 03:15:37.817843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:15:37.817859 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:15:37.817875 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 03:15:37.817887 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:15:37.817899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:15:37.817911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:15:37.817925 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 03:15:37.817940 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:15:37.817957 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 03:15:37.817990 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 03:15:37.818009 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 03:15:37.818021 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 03:15:37.818039 systemd[1]: Stopped systemd-fsck-usr.service. May 27 03:15:37.818052 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:15:37.818064 kernel: fuse: init (API version 7.41) May 27 03:15:37.818076 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:15:37.818087 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 27 03:15:37.818099 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:15:37.818111 kernel: ACPI: bus type drm_connector registered May 27 03:15:37.818128 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 03:15:37.818140 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 03:15:37.818151 kernel: loop: module loaded May 27 03:15:37.818163 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:15:37.818189 systemd[1]: verity-setup.service: Deactivated successfully. May 27 03:15:37.818203 systemd[1]: Stopped verity-setup.service. May 27 03:15:37.818217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:15:37.818229 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 03:15:37.818242 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 03:15:37.818254 systemd[1]: Mounted media.mount - External Media Directory. May 27 03:15:37.818276 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 03:15:37.818312 systemd-journald[1206]: Collecting audit messages is disabled. May 27 03:15:37.818340 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 03:15:37.818353 systemd-journald[1206]: Journal started May 27 03:15:37.818375 systemd-journald[1206]: Runtime Journal (/run/log/journal/33e018d7c2734c5191ca446adcbcc95e) is 6M, max 48.2M, 42.2M free. May 27 03:15:37.546164 systemd[1]: Queued start job for default target multi-user.target. May 27 03:15:37.565224 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 03:15:37.565761 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 27 03:15:37.822399 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:15:37.823273 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 03:15:37.824855 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 03:15:37.826429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:15:37.828067 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 03:15:37.828409 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 03:15:37.830097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:15:37.830316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:15:37.831838 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:15:37.832098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:15:37.833497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:15:37.833732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:15:37.835274 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 03:15:37.835495 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 03:15:37.837037 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:15:37.837259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:15:37.838724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:15:37.840272 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:15:37.841936 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 03:15:37.843665 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
May 27 03:15:37.861696 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:15:37.864641 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 03:15:37.867427 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 03:15:37.868753 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 03:15:37.868892 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:15:37.871262 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 03:15:37.877275 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 03:15:37.880483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:15:37.882116 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 03:15:37.888094 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 03:15:37.889410 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:15:37.898680 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 03:15:37.899893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:15:37.901121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:15:37.905168 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 03:15:37.910250 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
May 27 03:15:37.914366 systemd-journald[1206]: Time spent on flushing to /var/log/journal/33e018d7c2734c5191ca446adcbcc95e is 21.331ms for 1035 entries.
May 27 03:15:37.914366 systemd-journald[1206]: System Journal (/var/log/journal/33e018d7c2734c5191ca446adcbcc95e) is 8M, max 195.6M, 187.6M free.
May 27 03:15:37.950478 systemd-journald[1206]: Received client request to flush runtime journal.
May 27 03:15:37.950514 kernel: loop0: detected capacity change from 0 to 229808
May 27 03:15:37.916132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:15:37.918353 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:15:37.918677 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:15:37.927577 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:15:37.929533 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:15:37.932957 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:15:37.937365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:15:37.952680 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:15:37.967267 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:15:37.970797 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:15:37.982725 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:15:37.986204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:15:37.986990 kernel: loop1: detected capacity change from 0 to 113872
May 27 03:15:38.020027 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 27 03:15:38.020049 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 27 03:15:38.027132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:15:38.081073 kernel: loop2: detected capacity change from 0 to 146240
May 27 03:15:38.122017 kernel: loop3: detected capacity change from 0 to 229808
May 27 03:15:38.133983 kernel: loop4: detected capacity change from 0 to 113872
May 27 03:15:38.143988 kernel: loop5: detected capacity change from 0 to 146240
May 27 03:15:38.159643 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 03:15:38.160301 (sd-merge)[1272]: Merged extensions into '/usr'.
May 27 03:15:38.165460 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:15:38.165478 systemd[1]: Reloading...
May 27 03:15:38.253342 zram_generator::config[1301]: No configuration found.
May 27 03:15:38.452680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:15:38.496997 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:15:38.558857 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:15:38.559769 systemd[1]: Reloading finished in 393 ms.
May 27 03:15:38.597398 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:15:38.599129 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:15:38.626256 systemd[1]: Starting ensure-sysext.service...
May 27 03:15:38.628571 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:15:38.644678 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
May 27 03:15:38.644706 systemd[1]: Reloading...
May 27 03:15:38.657183 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:15:38.657221 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:15:38.657499 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:15:38.657762 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:15:38.659204 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:15:38.659598 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 27 03:15:38.659738 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 27 03:15:38.664541 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:15:38.664639 systemd-tmpfiles[1337]: Skipping /boot
May 27 03:15:38.685156 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:15:38.686142 systemd-tmpfiles[1337]: Skipping /boot
May 27 03:15:38.783527 zram_generator::config[1397]: No configuration found.
May 27 03:15:38.905351 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:15:38.993277 systemd[1]: Reloading finished in 348 ms.
May 27 03:15:39.016414 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 03:15:39.046366 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:15:39.055496 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:15:39.058071 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 03:15:39.067067 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 03:15:39.071563 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:15:39.076183 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:15:39.080179 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 03:15:39.084330 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:39.084513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:15:39.092049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:15:39.096163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:15:39.099259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:15:39.101191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:15:39.101368 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:15:39.105308 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 03:15:39.106560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:39.108462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:15:39.112148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:15:39.113996 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 03:15:39.117422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:15:39.117719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:15:39.123408 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:15:39.123766 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:15:39.129073 systemd-udevd[1408]: Using default interface naming scheme 'v255'.
May 27 03:15:39.133573 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:39.133817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:15:39.137322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:15:39.139638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:15:39.142062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:15:39.145198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:15:39.145317 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:15:39.148090 augenrules[1437]: No rules
May 27 03:15:39.154948 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 03:15:39.156083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:39.157577 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:15:39.157845 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:15:39.159884 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 03:15:39.162300 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 03:15:39.164553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:15:39.164847 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:15:39.167692 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 03:15:39.169070 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:15:39.171011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:15:39.171237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:15:39.172979 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:15:39.173242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:15:39.175456 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 03:15:39.199467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:39.205396 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:15:39.206921 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:15:39.208557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:15:39.214230 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:15:39.222384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:15:39.225999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:15:39.227227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:15:39.227444 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:15:39.233330 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:15:39.235103 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:15:39.235204 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:39.236742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:15:39.238246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:15:39.240293 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:15:39.240573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:15:39.244241 augenrules[1484]: /sbin/augenrules: No change
May 27 03:15:39.247159 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:15:39.248108 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:15:39.265313 systemd[1]: Finished ensure-sysext.service.
May 27 03:15:39.267552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:15:39.269043 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:15:39.294363 augenrules[1519]: No rules
May 27 03:15:39.297897 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:15:39.299136 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:15:39.315462 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 03:15:39.318566 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 03:15:39.347373 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 03:15:39.348617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:15:39.348697 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:15:39.352850 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 03:15:39.376022 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 03:15:39.385989 kernel: mousedev: PS/2 mouse device common for all mice
May 27 03:15:39.400003 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 27 03:15:39.404987 kernel: ACPI: button: Power Button [PWRF]
May 27 03:15:39.444071 systemd-networkd[1493]: lo: Link UP
May 27 03:15:39.444083 systemd-networkd[1493]: lo: Gained carrier
May 27 03:15:39.446260 systemd-networkd[1493]: Enumeration completed
May 27 03:15:39.446359 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:15:39.449028 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:15:39.449117 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:15:39.449575 systemd-networkd[1493]: eth0: Link UP
May 27 03:15:39.449779 systemd-networkd[1493]: eth0: Gained carrier
May 27 03:15:39.449793 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:15:39.450545 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 03:15:39.455108 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 03:15:39.460024 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 27 03:15:39.460305 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 27 03:15:39.461160 systemd-resolved[1406]: Positive Trust Anchors:
May 27 03:15:39.461183 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:15:39.461217 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:15:39.461658 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 03:15:39.463070 systemd-networkd[1493]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 03:15:39.470541 systemd-resolved[1406]: Defaulting to hostname 'linux'.
May 27 03:15:39.474193 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:15:39.475581 systemd[1]: Reached target network.target - Network.
May 27 03:15:39.477053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:15:39.478442 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 03:15:40.852821 systemd-resolved[1406]: Clock change detected. Flushing caches.
May 27 03:15:40.853126 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:15:40.854026 systemd-timesyncd[1528]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 27 03:15:40.855325 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 03:15:40.856766 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 03:15:40.858623 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 03:15:40.859004 systemd-timesyncd[1528]: Initial clock synchronization to Tue 2025-05-27 03:15:40.852774 UTC.
May 27 03:15:40.860633 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 03:15:40.862616 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 03:15:40.862640 systemd[1]: Reached target paths.target - Path Units.
May 27 03:15:40.863793 systemd[1]: Reached target time-set.target - System Time Set.
May 27 03:15:40.865008 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 03:15:40.867745 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 03:15:40.869914 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:15:40.872585 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 03:15:40.876278 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 03:15:40.886682 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 03:15:40.889819 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 03:15:40.891105 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 03:15:40.901771 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 03:15:40.906192 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 03:15:40.908672 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 03:15:40.911436 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 03:15:40.916112 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:15:40.917650 systemd[1]: Reached target basic.target - Basic System.
May 27 03:15:40.919783 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 03:15:40.919826 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 03:15:40.923791 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 03:15:40.927516 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 03:15:40.938757 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 03:15:40.943781 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 03:15:40.951920 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 03:15:40.953641 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 03:15:40.955832 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 03:15:40.961655 jq[1555]: false
May 27 03:15:40.962188 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 03:15:40.967727 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 03:15:40.972717 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 03:15:40.981783 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 03:15:40.997409 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing passwd entry cache
May 27 03:15:40.997438 oslogin_cache_refresh[1559]: Refreshing passwd entry cache
May 27 03:15:41.022187 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 03:15:41.025237 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 03:15:41.026106 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 03:15:41.027099 systemd[1]: Starting update-engine.service - Update Engine...
May 27 03:15:41.031765 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 03:15:41.039389 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 03:15:41.041432 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 03:15:41.041769 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 03:15:41.049669 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting users, quitting
May 27 03:15:41.049669 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:15:41.049669 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing group entry cache
May 27 03:15:41.048812 oslogin_cache_refresh[1559]: Failure getting users, quitting
May 27 03:15:41.048854 oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:15:41.048961 oslogin_cache_refresh[1559]: Refreshing group entry cache
May 27 03:15:41.057340 jq[1570]: true
May 27 03:15:41.059217 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting groups, quitting
May 27 03:15:41.059212 oslogin_cache_refresh[1559]: Failure getting groups, quitting
May 27 03:15:41.059385 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:15:41.059238 oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:15:41.072035 extend-filesystems[1557]: Found loop3
May 27 03:15:41.082867 extend-filesystems[1557]: Found loop4
May 27 03:15:41.082867 extend-filesystems[1557]: Found loop5
May 27 03:15:41.082867 extend-filesystems[1557]: Found sr0
May 27 03:15:41.082867 extend-filesystems[1557]: Found vda
May 27 03:15:41.082867 extend-filesystems[1557]: Found vda1
May 27 03:15:41.082867 extend-filesystems[1557]: Found vda2
May 27 03:15:41.082867 extend-filesystems[1557]: Found vda3
May 27 03:15:41.082867 extend-filesystems[1557]: Found usr
May 27 03:15:41.082867 extend-filesystems[1557]: Found vda4
May 27 03:15:41.082867 extend-filesystems[1557]: Found vda6
May 27 03:15:41.082867 extend-filesystems[1557]: Found vda7
May 27 03:15:41.082867 extend-filesystems[1557]: Found vda9
May 27 03:15:41.082867 extend-filesystems[1557]: Checking size of /dev/vda9
May 27 03:15:41.143003 update_engine[1568]: I20250527 03:15:41.098969 1568 main.cc:92] Flatcar Update Engine starting
May 27 03:15:41.149782 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 03:15:41.150087 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 03:15:41.151697 systemd[1]: motdgen.service: Deactivated successfully.
May 27 03:15:41.153132 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 03:15:41.157634 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 03:15:41.158107 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 03:15:41.180027 (ntainerd)[1578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 03:15:41.187409 extend-filesystems[1557]: Resized partition /dev/vda9
May 27 03:15:41.190503 extend-filesystems[1590]: resize2fs 1.47.2 (1-Jan-2025)
May 27 03:15:41.191831 jq[1577]: true
May 27 03:15:41.196178 kernel: kvm_amd: TSC scaling supported
May 27 03:15:41.196214 kernel: kvm_amd: Nested Virtualization enabled
May 27 03:15:41.196228 kernel: kvm_amd: Nested Paging enabled
May 27 03:15:41.196240 kernel: kvm_amd: LBR virtualization supported
May 27 03:15:41.197650 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 27 03:15:41.197677 kernel: kvm_amd: Virtual GIF supported
May 27 03:15:41.214989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:15:41.223572 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:15:41.224917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:15:41.229986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:15:41.242568 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 27 03:15:41.279849 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 03:15:41.279881 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 03:15:41.282189 systemd-logind[1565]: New seat seat0.
May 27 03:15:41.284501 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 03:15:41.286940 tar[1574]: linux-amd64/LICENSE
May 27 03:15:41.311403 bash[1613]: Updated "/home/core/.ssh/authorized_keys"
May 27 03:15:41.312130 dbus-daemon[1553]: [system] SELinux support is enabled
May 27 03:15:41.317603 update_engine[1568]: I20250527 03:15:41.317521 1568 update_check_scheduler.cc:74] Next update check in 9m52s
May 27 03:15:41.325900 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 03:15:41.328913 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 03:15:41.330699 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 03:15:41.330826 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 03:15:41.330856 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 03:15:41.331130 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 03:15:41.331155 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 03:15:41.333584 tar[1574]: linux-amd64/helm
May 27 03:15:41.334568 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 27 03:15:41.359641 kernel: EDAC MC: Ver: 3.0.0
May 27 03:15:41.342436 dbus-daemon[1553]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 27 03:15:41.340293 systemd[1]: Started update-engine.service - Update Engine.
May 27 03:15:41.342984 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 03:15:41.359949 extend-filesystems[1590]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 27 03:15:41.359949 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 03:15:41.359949 extend-filesystems[1590]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 27 03:15:41.363712 extend-filesystems[1557]: Resized filesystem in /dev/vda9
May 27 03:15:41.368170 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 03:15:41.368542 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 03:15:41.411404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:15:41.418377 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 03:15:41.624634 containerd[1578]: time="2025-05-27T03:15:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 03:15:41.627963 containerd[1578]: time="2025-05-27T03:15:41.627910370Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 03:15:41.642635 containerd[1578]: time="2025-05-27T03:15:41.642447998Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.933µs"
May 27 03:15:41.642761 containerd[1578]: time="2025-05-27T03:15:41.642738904Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 03:15:41.642832 containerd[1578]: time="2025-05-27T03:15:41.642818573Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 03:15:41.643073 containerd[1578]: time="2025-05-27T03:15:41.643054496Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 03:15:41.643147 containerd[1578]: time="2025-05-27T03:15:41.643132723Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 03:15:41.643226 containerd[1578]: time="2025-05-27T03:15:41.643212362Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:15:41.643370 containerd[1578]: time="2025-05-27T03:15:41.643351152Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:15:41.643424 containerd[1578]: time="2025-05-27T03:15:41.643412307Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:15:41.643782 containerd[1578]: time="2025-05-27T03:15:41.643761452Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:15:41.643852 containerd[1578]: time="2025-05-27T03:15:41.643838286Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:15:41.643903 containerd[1578]: time="2025-05-27T03:15:41.643891015Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:15:41.643949 containerd[1578]: time="2025-05-27T03:15:41.643937963Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 03:15:41.644288 containerd[1578]: time="2025-05-27T03:15:41.644270457Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 03:15:41.644640 containerd[1578]: time="2025-05-27T03:15:41.644616806Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:15:41.644723 containerd[1578]: time="2025-05-27T03:15:41.644707096Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:15:41.644771 containerd[1578]: time="2025-05-27T03:15:41.644759133Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 03:15:41.644857 containerd[1578]: time="2025-05-27T03:15:41.644843842Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 03:15:41.645149 containerd[1578]: time="2025-05-27T03:15:41.645130600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 03:15:41.645301 containerd[1578]: time="2025-05-27T03:15:41.645284348Z" level=info msg="metadata content store policy set" policy=shared
May 27 03:15:41.652350 containerd[1578]: time="2025-05-27T03:15:41.652288015Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 03:15:41.652456 containerd[1578]: time="2025-05-27T03:15:41.652431224Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 03:15:41.652487 containerd[1578]: time="2025-05-27T03:15:41.652459567Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 03:15:41.652487 containerd[1578]: time="2025-05-27T03:15:41.652473203Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 03:15:41.652523 containerd[1578]: time="2025-05-27T03:15:41.652485816Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 03:15:41.652523 containerd[1578]: time="2025-05-27T03:15:41.652504752Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 03:15:41.652585 containerd[1578]: time="2025-05-27T03:15:41.652571988Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 03:15:41.652606 containerd[1578]: time="2025-05-27T03:15:41.652587507Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 03:15:41.652606 containerd[1578]: time="2025-05-27T03:15:41.652598708Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 03:15:41.652646 containerd[1578]: time="2025-05-27T03:15:41.652608707Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 03:15:41.652646 containerd[1578]: time="2025-05-27T03:15:41.652618415Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 03:15:41.652646 containerd[1578]: time="2025-05-27T03:15:41.652636038Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 03:15:41.652861 containerd[1578]: time="2025-05-27T03:15:41.652838127Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 03:15:41.652903 containerd[1578]: time="2025-05-27T03:15:41.652867823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 03:15:41.652903 containerd[1578]: time="2025-05-27T03:15:41.652888882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 03:15:41.652940 containerd[1578]: time="2025-05-27T03:15:41.652910503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 03:15:41.652940 containerd[1578]: time="2025-05-27T03:15:41.652931973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 03:15:41.652976 containerd[1578]: time="2025-05-27T03:15:41.652951309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 03:15:41.652976 containerd[1578]: time="2025-05-27T03:15:41.652964013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 03:15:41.652976 containerd[1578]: time="2025-05-27T03:15:41.652974773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27
03:15:41.653039 containerd[1578]: time="2025-05-27T03:15:41.652986716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:15:41.653039 containerd[1578]: time="2025-05-27T03:15:41.652997927Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:15:41.653039 containerd[1578]: time="2025-05-27T03:15:41.653008898Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 03:15:41.653252 containerd[1578]: time="2025-05-27T03:15:41.653109556Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 03:15:41.653252 containerd[1578]: time="2025-05-27T03:15:41.653200858Z" level=info msg="Start snapshots syncer" May 27 03:15:41.653252 containerd[1578]: time="2025-05-27T03:15:41.653230183Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 03:15:41.653538 containerd[1578]: time="2025-05-27T03:15:41.653492875Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 03:15:41.653719 containerd[1578]: time="2025-05-27T03:15:41.653659448Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 03:15:41.654682 containerd[1578]: time="2025-05-27T03:15:41.654658011Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:15:41.654829 containerd[1578]: time="2025-05-27T03:15:41.654803684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:15:41.654861 containerd[1578]: time="2025-05-27T03:15:41.654837718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:15:41.654861 containerd[1578]: time="2025-05-27T03:15:41.654849790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:15:41.654912 containerd[1578]: time="2025-05-27T03:15:41.654860541Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:15:41.654912 containerd[1578]: time="2025-05-27T03:15:41.654875519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:15:41.654912 containerd[1578]: time="2025-05-27T03:15:41.654885808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 03:15:41.654912 containerd[1578]: time="2025-05-27T03:15:41.654896939Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:15:41.654982 containerd[1578]: time="2025-05-27T03:15:41.654932045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:15:41.654982 containerd[1578]: time="2025-05-27T03:15:41.654945099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:15:41.654982 containerd[1578]: time="2025-05-27T03:15:41.654955639Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:15:41.655039 containerd[1578]: time="2025-05-27T03:15:41.655007697Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:15:41.655039 containerd[1578]: time="2025-05-27T03:15:41.655023597Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:15:41.655039 containerd[1578]: time="2025-05-27T03:15:41.655032914Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:15:41.655288 containerd[1578]: time="2025-05-27T03:15:41.655043354Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:15:41.655288 containerd[1578]: time="2025-05-27T03:15:41.655127311Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:15:41.655288 containerd[1578]: time="2025-05-27T03:15:41.655138432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:15:41.655288 containerd[1578]: time="2025-05-27T03:15:41.655148831Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:15:41.655288 containerd[1578]: time="2025-05-27T03:15:41.655170212Z" level=info msg="runtime interface created" May 27 03:15:41.655288 containerd[1578]: time="2025-05-27T03:15:41.655176223Z" level=info msg="created NRI interface" May 27 03:15:41.655288 containerd[1578]: time="2025-05-27T03:15:41.655184078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:15:41.655288 containerd[1578]: time="2025-05-27T03:15:41.655194727Z" level=info msg="Connect containerd service" May 27 03:15:41.655288 containerd[1578]: time="2025-05-27T03:15:41.655219544Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:15:41.656447 
containerd[1578]: time="2025-05-27T03:15:41.656412622Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:15:41.828471 sshd_keygen[1598]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:15:41.882091 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 03:15:41.886214 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:15:41.905175 systemd[1]: issuegen.service: Deactivated successfully. May 27 03:15:41.905480 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:15:41.909922 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:15:41.993100 containerd[1578]: time="2025-05-27T03:15:41.992961602Z" level=info msg="Start subscribing containerd event" May 27 03:15:41.993490 containerd[1578]: time="2025-05-27T03:15:41.993468803Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:15:41.993616 containerd[1578]: time="2025-05-27T03:15:41.993600580Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 27 03:15:41.995054 containerd[1578]: time="2025-05-27T03:15:41.995005215Z" level=info msg="Start recovering state" May 27 03:15:41.995966 containerd[1578]: time="2025-05-27T03:15:41.995778095Z" level=info msg="Start event monitor" May 27 03:15:41.996059 containerd[1578]: time="2025-05-27T03:15:41.996038774Z" level=info msg="Start cni network conf syncer for default" May 27 03:15:41.996431 containerd[1578]: time="2025-05-27T03:15:41.996378501Z" level=info msg="Start streaming server" May 27 03:15:41.996431 containerd[1578]: time="2025-05-27T03:15:41.996404439Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:15:41.996741 containerd[1578]: time="2025-05-27T03:15:41.996674526Z" level=info msg="runtime interface starting up..." May 27 03:15:41.996741 containerd[1578]: time="2025-05-27T03:15:41.996691979Z" level=info msg="starting plugins..." May 27 03:15:41.996741 containerd[1578]: time="2025-05-27T03:15:41.996712898Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:15:41.997560 systemd[1]: Started containerd.service - containerd container runtime. May 27 03:15:41.997700 containerd[1578]: time="2025-05-27T03:15:41.997640909Z" level=info msg="containerd successfully booted in 0.373969s" May 27 03:15:42.006738 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:15:42.010052 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:15:42.013212 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:15:42.014582 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:15:42.120080 tar[1574]: linux-amd64/README.md May 27 03:15:42.152875 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:15:42.424802 systemd-networkd[1493]: eth0: Gained IPv6LL May 27 03:15:42.428681 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
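The containerd startup above ends with an error entry: `failed to load cni during init ... no network config found in /etc/cni/net.d`. That is the expected state on a node where no CNI plugin has been installed yet; the "cni network conf syncer" started above will pick a config up once one appears. For illustration only (the network name, bridge name, and subnet below are assumptions, not values from this host), a minimal bridge conflist of the general shape the CRI plugin loads from /etc/cni/net.d can be rendered like this:

```python
import json

# Hypothetical minimal CNI conflist of the shape containerd's CRI plugin loads
# from /etc/cni/net.d ("example-net", cni0 and the 10.88.0.0/16 subnet are
# illustrative assumptions, not values from this host).
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# Written to e.g. /etc/cni/net.d/10-example.conflist this would clear the
# "no network config found" condition; here we only render the JSON.
print(json.dumps(conflist, indent=2))
```

A node-level CNI plugin (flannel, Calico, etc.) normally installs its own conflist, so hand-writing one is only needed for standalone setups.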
May 27 03:15:42.430957 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:15:42.434184 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 03:15:42.437248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:15:42.449693 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:15:42.469316 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 03:15:42.469779 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 03:15:42.471822 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:15:42.479239 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:15:43.794191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:15:43.824287 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:15:43.825933 systemd[1]: Startup finished in 3.813s (kernel) + 6.334s (initrd) + 5.509s (userspace) = 15.657s. May 27 03:15:43.827486 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:15:44.547535 kubelet[1694]: E0527 03:15:44.547413 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:15:44.551944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:15:44.552147 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:15:44.552586 systemd[1]: kubelet.service: Consumed 1.859s CPU time, 266.9M memory peak. 
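The kubelet exit above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the usual state on a node that has not yet run `kubeadm init` or `kubeadm join`, which write that file; systemd keeps restarting the unit until it exists (the restart is visible further down as "Scheduled restart job, restart counter is at 1"). A minimal sketch of the precondition check, with the path taken from the log message:

```python
from pathlib import Path

# The kubelet refuses to start without its config file; kubeadm init/join
# normally writes it. The default path below is the one named in the log entry.
def kubelet_config_present(path: str = "/var/lib/kubelet/config.yaml") -> bool:
    return Path(path).is_file()

if not kubelet_config_present():
    print("kubelet config missing - run kubeadm init/join before starting kubelet")
```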
May 27 03:15:46.010992 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:15:46.012349 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:60414.service - OpenSSH per-connection server daemon (10.0.0.1:60414). May 27 03:15:46.084457 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 60414 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:15:46.086435 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:15:46.093878 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 03:15:46.095018 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:15:46.101514 systemd-logind[1565]: New session 1 of user core. May 27 03:15:46.118278 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:15:46.121660 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 03:15:46.139447 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:15:46.141903 systemd-logind[1565]: New session c1 of user core. May 27 03:15:46.298195 systemd[1711]: Queued start job for default target default.target. May 27 03:15:46.308869 systemd[1711]: Created slice app.slice - User Application Slice. May 27 03:15:46.308895 systemd[1711]: Reached target paths.target - Paths. May 27 03:15:46.308935 systemd[1711]: Reached target timers.target - Timers. May 27 03:15:46.310608 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:15:46.322083 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:15:46.322228 systemd[1711]: Reached target sockets.target - Sockets. May 27 03:15:46.322271 systemd[1711]: Reached target basic.target - Basic System. May 27 03:15:46.322344 systemd[1711]: Reached target default.target - Main User Target. 
May 27 03:15:46.322388 systemd[1711]: Startup finished in 172ms. May 27 03:15:46.322793 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:15:46.324518 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 03:15:46.392306 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:60422.service - OpenSSH per-connection server daemon (10.0.0.1:60422). May 27 03:15:46.451030 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 60422 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:15:46.452678 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:15:46.457649 systemd-logind[1565]: New session 2 of user core. May 27 03:15:46.471886 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 03:15:46.527407 sshd[1724]: Connection closed by 10.0.0.1 port 60422 May 27 03:15:46.527798 sshd-session[1722]: pam_unix(sshd:session): session closed for user core May 27 03:15:46.541354 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:60422.service: Deactivated successfully. May 27 03:15:46.543378 systemd[1]: session-2.scope: Deactivated successfully. May 27 03:15:46.544116 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit. May 27 03:15:46.547441 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:60438.service - OpenSSH per-connection server daemon (10.0.0.1:60438). May 27 03:15:46.548028 systemd-logind[1565]: Removed session 2. May 27 03:15:46.604465 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 60438 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:15:46.606153 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:15:46.610699 systemd-logind[1565]: New session 3 of user core. May 27 03:15:46.621717 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 27 03:15:46.672040 sshd[1732]: Connection closed by 10.0.0.1 port 60438 May 27 03:15:46.672427 sshd-session[1730]: pam_unix(sshd:session): session closed for user core May 27 03:15:46.685590 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:60438.service: Deactivated successfully. May 27 03:15:46.687571 systemd[1]: session-3.scope: Deactivated successfully. May 27 03:15:46.688413 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit. May 27 03:15:46.691192 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:60452.service - OpenSSH per-connection server daemon (10.0.0.1:60452). May 27 03:15:46.692105 systemd-logind[1565]: Removed session 3. May 27 03:15:46.759183 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 60452 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:15:46.761696 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:15:46.767578 systemd-logind[1565]: New session 4 of user core. May 27 03:15:46.781684 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 03:15:46.838444 sshd[1740]: Connection closed by 10.0.0.1 port 60452 May 27 03:15:46.838776 sshd-session[1738]: pam_unix(sshd:session): session closed for user core May 27 03:15:46.851381 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:60452.service: Deactivated successfully. May 27 03:15:46.853760 systemd[1]: session-4.scope: Deactivated successfully. May 27 03:15:46.854834 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit. May 27 03:15:46.858856 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:60462.service - OpenSSH per-connection server daemon (10.0.0.1:60462). May 27 03:15:46.859573 systemd-logind[1565]: Removed session 4. 
May 27 03:15:46.920059 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 60462 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:15:46.921662 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:15:46.926645 systemd-logind[1565]: New session 5 of user core. May 27 03:15:46.936686 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 03:15:46.998643 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 03:15:46.999060 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:15:47.020407 sudo[1749]: pam_unix(sudo:session): session closed for user root May 27 03:15:47.022601 sshd[1748]: Connection closed by 10.0.0.1 port 60462 May 27 03:15:47.023041 sshd-session[1746]: pam_unix(sshd:session): session closed for user core May 27 03:15:47.038633 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:60462.service: Deactivated successfully. May 27 03:15:47.040700 systemd[1]: session-5.scope: Deactivated successfully. May 27 03:15:47.041577 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit. May 27 03:15:47.045131 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:60464.service - OpenSSH per-connection server daemon (10.0.0.1:60464). May 27 03:15:47.046029 systemd-logind[1565]: Removed session 5. May 27 03:15:47.107892 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 60464 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:15:47.109493 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:15:47.114410 systemd-logind[1565]: New session 6 of user core. May 27 03:15:47.127783 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 03:15:47.183674 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 03:15:47.184041 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:15:47.365735 sudo[1759]: pam_unix(sudo:session): session closed for user root May 27 03:15:47.372537 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 03:15:47.372869 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:15:47.383477 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:15:47.437399 augenrules[1781]: No rules May 27 03:15:47.438497 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:15:47.438913 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:15:47.440016 sudo[1758]: pam_unix(sudo:session): session closed for user root May 27 03:15:47.441634 sshd[1757]: Connection closed by 10.0.0.1 port 60464 May 27 03:15:47.441886 sshd-session[1755]: pam_unix(sshd:session): session closed for user core May 27 03:15:47.451333 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:60464.service: Deactivated successfully. May 27 03:15:47.453368 systemd[1]: session-6.scope: Deactivated successfully. May 27 03:15:47.454208 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit. May 27 03:15:47.457484 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:60480.service - OpenSSH per-connection server daemon (10.0.0.1:60480). May 27 03:15:47.458254 systemd-logind[1565]: Removed session 6. May 27 03:15:47.509401 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 60480 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:15:47.510636 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:15:47.515909 systemd-logind[1565]: New session 7 of user core. 
May 27 03:15:47.525739 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 03:15:47.581494 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 03:15:47.581943 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:15:48.413749 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 03:15:48.451284 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 03:15:49.033002 dockerd[1813]: time="2025-05-27T03:15:49.032918208Z" level=info msg="Starting up" May 27 03:15:49.033945 dockerd[1813]: time="2025-05-27T03:15:49.033911541Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 03:15:49.512658 dockerd[1813]: time="2025-05-27T03:15:49.512462055Z" level=info msg="Loading containers: start." May 27 03:15:49.523594 kernel: Initializing XFRM netlink socket May 27 03:15:49.789664 systemd-networkd[1493]: docker0: Link UP May 27 03:15:49.795158 dockerd[1813]: time="2025-05-27T03:15:49.795096286Z" level=info msg="Loading containers: done." May 27 03:15:49.857258 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3715396490-merged.mount: Deactivated successfully. 
May 27 03:15:49.859700 dockerd[1813]: time="2025-05-27T03:15:49.859656483Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 03:15:49.859792 dockerd[1813]: time="2025-05-27T03:15:49.859765387Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 03:15:49.859938 dockerd[1813]: time="2025-05-27T03:15:49.859912964Z" level=info msg="Initializing buildkit" May 27 03:15:49.890592 dockerd[1813]: time="2025-05-27T03:15:49.890520645Z" level=info msg="Completed buildkit initialization" May 27 03:15:49.897879 dockerd[1813]: time="2025-05-27T03:15:49.897822100Z" level=info msg="Daemon has completed initialization" May 27 03:15:49.898012 dockerd[1813]: time="2025-05-27T03:15:49.897927298Z" level=info msg="API listen on /run/docker.sock" May 27 03:15:49.898142 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 03:15:50.521876 containerd[1578]: time="2025-05-27T03:15:50.521796548Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 03:15:51.435736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2114187700.mount: Deactivated successfully. 
May 27 03:15:52.543654 containerd[1578]: time="2025-05-27T03:15:52.543585817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:15:52.544618 containerd[1578]: time="2025-05-27T03:15:52.544579441Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 27 03:15:52.545743 containerd[1578]: time="2025-05-27T03:15:52.545683722Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:15:52.548181 containerd[1578]: time="2025-05-27T03:15:52.548122837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:15:52.549158 containerd[1578]: time="2025-05-27T03:15:52.549101843Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 2.027237678s" May 27 03:15:52.549158 containerd[1578]: time="2025-05-27T03:15:52.549153410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 27 03:15:52.550355 containerd[1578]: time="2025-05-27T03:15:52.550329957Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 03:15:54.131737 containerd[1578]: time="2025-05-27T03:15:54.131679407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:15:54.132401 containerd[1578]: time="2025-05-27T03:15:54.132345386Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 27 03:15:54.133613 containerd[1578]: time="2025-05-27T03:15:54.133581354Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:15:54.136123 containerd[1578]: time="2025-05-27T03:15:54.136091212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:15:54.136998 containerd[1578]: time="2025-05-27T03:15:54.136949321Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.586591973s" May 27 03:15:54.136998 containerd[1578]: time="2025-05-27T03:15:54.136984898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 27 03:15:54.137706 containerd[1578]: time="2025-05-27T03:15:54.137625440Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 03:15:54.802721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 03:15:54.805161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:15:55.472446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:15:55.495595 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:15:55.764020 kubelet[2087]: E0527 03:15:55.763720 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:15:55.772616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:15:55.773217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:15:55.773881 systemd[1]: kubelet.service: Consumed 520ms CPU time, 114.8M memory peak.
May 27 03:15:56.100283 containerd[1578]: time="2025-05-27T03:15:56.100146358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:15:56.101138 containerd[1578]: time="2025-05-27T03:15:56.101089667Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960"
May 27 03:15:56.102462 containerd[1578]: time="2025-05-27T03:15:56.102413500Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:15:56.105163 containerd[1578]: time="2025-05-27T03:15:56.105106612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:15:56.106019 containerd[1578]: time="2025-05-27T03:15:56.105971995Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.968318703s"
May 27 03:15:56.106019 containerd[1578]: time="2025-05-27T03:15:56.106016499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\""
May 27 03:15:56.106652 containerd[1578]: time="2025-05-27T03:15:56.106618337Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 27 03:15:57.640663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2909253464.mount: Deactivated successfully.
May 27 03:15:57.943480 containerd[1578]: time="2025-05-27T03:15:57.943332568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:15:57.944219 containerd[1578]: time="2025-05-27T03:15:57.944154420Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075"
May 27 03:15:57.945543 containerd[1578]: time="2025-05-27T03:15:57.945512147Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:15:57.947451 containerd[1578]: time="2025-05-27T03:15:57.947411379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:15:57.948128 containerd[1578]: time="2025-05-27T03:15:57.948071076Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.841409578s"
May 27 03:15:57.948174 containerd[1578]: time="2025-05-27T03:15:57.948129366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\""
May 27 03:15:57.948806 containerd[1578]: time="2025-05-27T03:15:57.948772111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 27 03:15:58.580737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769103484.mount: Deactivated successfully.
May 27 03:15:59.792171 containerd[1578]: time="2025-05-27T03:15:59.792103174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:15:59.794272 containerd[1578]: time="2025-05-27T03:15:59.794199345Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
May 27 03:15:59.795498 containerd[1578]: time="2025-05-27T03:15:59.795431857Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:15:59.798354 containerd[1578]: time="2025-05-27T03:15:59.798314574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:15:59.799225 containerd[1578]: time="2025-05-27T03:15:59.799177864Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.850362962s"
May 27 03:15:59.799225 containerd[1578]: time="2025-05-27T03:15:59.799210725Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
May 27 03:15:59.800366 containerd[1578]: time="2025-05-27T03:15:59.800339523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 03:16:00.322240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107873912.mount: Deactivated successfully.
May 27 03:16:00.329463 containerd[1578]: time="2025-05-27T03:16:00.329414366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:16:00.330236 containerd[1578]: time="2025-05-27T03:16:00.330181234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 27 03:16:00.331219 containerd[1578]: time="2025-05-27T03:16:00.331177333Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:16:00.333384 containerd[1578]: time="2025-05-27T03:16:00.333342293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:16:00.335840 containerd[1578]: time="2025-05-27T03:16:00.334765202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 534.389993ms"
May 27 03:16:00.335840 containerd[1578]: time="2025-05-27T03:16:00.334807161Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 03:16:00.336316 containerd[1578]: time="2025-05-27T03:16:00.336277199Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 27 03:16:03.297916 containerd[1578]: time="2025-05-27T03:16:03.297843490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:16:03.328067 containerd[1578]: time="2025-05-27T03:16:03.328000947Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739"
May 27 03:16:03.380823 containerd[1578]: time="2025-05-27T03:16:03.380766420Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:16:03.401337 containerd[1578]: time="2025-05-27T03:16:03.401283154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:16:03.403612 containerd[1578]: time="2025-05-27T03:16:03.403468723Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.067160667s"
May 27 03:16:03.403612 containerd[1578]: time="2025-05-27T03:16:03.403533605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
May 27 03:16:06.023299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 03:16:06.025431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:16:06.247309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:16:06.263073 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:16:06.303314 kubelet[2207]: E0527 03:16:06.303148 2207 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:16:06.308411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:16:06.308676 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:16:06.309146 systemd[1]: kubelet.service: Consumed 217ms CPU time, 109.1M memory peak.
May 27 03:16:07.095883 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:16:07.096049 systemd[1]: kubelet.service: Consumed 217ms CPU time, 109.1M memory peak.
May 27 03:16:07.098320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:16:07.126874 systemd[1]: Reload requested from client PID 2223 ('systemctl') (unit session-7.scope)...
May 27 03:16:07.126891 systemd[1]: Reloading...
May 27 03:16:07.222738 zram_generator::config[2266]: No configuration found.
May 27 03:16:07.770854 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:16:07.888195 systemd[1]: Reloading finished in 760 ms.
May 27 03:16:07.957217 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 03:16:07.957317 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 03:16:07.957659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:16:07.957707 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.2M memory peak.
May 27 03:16:07.959399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:16:08.136489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:16:08.141511 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:16:08.180046 kubelet[2314]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:16:08.180046 kubelet[2314]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 03:16:08.180046 kubelet[2314]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:16:08.180463 kubelet[2314]: I0527 03:16:08.180110 2314 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:16:09.916928 kubelet[2314]: I0527 03:16:09.916868 2314 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 03:16:09.916928 kubelet[2314]: I0527 03:16:09.916907 2314 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:16:09.917470 kubelet[2314]: I0527 03:16:09.917180 2314 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 03:16:10.018326 kubelet[2314]: E0527 03:16:10.018256 2314 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 27 03:16:10.020829 kubelet[2314]: I0527 03:16:10.020771 2314 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:16:10.030299 kubelet[2314]: I0527 03:16:10.030264 2314 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:16:10.037086 kubelet[2314]: I0527 03:16:10.037045 2314 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:16:10.037355 kubelet[2314]: I0527 03:16:10.037312 2314 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:16:10.037587 kubelet[2314]: I0527 03:16:10.037343 2314 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:16:10.037712 kubelet[2314]: I0527 03:16:10.037592 2314 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:16:10.037712 kubelet[2314]: I0527 03:16:10.037603 2314 container_manager_linux.go:303] "Creating device plugin manager"
May 27 03:16:10.037787 kubelet[2314]: I0527 03:16:10.037762 2314 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:16:10.039678 kubelet[2314]: I0527 03:16:10.039643 2314 kubelet.go:480] "Attempting to sync node with API server"
May 27 03:16:10.039678 kubelet[2314]: I0527 03:16:10.039665 2314 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:16:10.039757 kubelet[2314]: I0527 03:16:10.039704 2314 kubelet.go:386] "Adding apiserver pod source"
May 27 03:16:10.039757 kubelet[2314]: I0527 03:16:10.039726 2314 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:16:10.046134 kubelet[2314]: I0527 03:16:10.046100 2314 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:16:10.046776 kubelet[2314]: I0527 03:16:10.046633 2314 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 03:16:10.047091 kubelet[2314]: E0527 03:16:10.047064 2314 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 03:16:10.047732 kubelet[2314]: W0527 03:16:10.047692 2314 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 03:16:10.048653 kubelet[2314]: E0527 03:16:10.048612 2314 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 03:16:10.050826 kubelet[2314]: I0527 03:16:10.050800 2314 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 03:16:10.050888 kubelet[2314]: I0527 03:16:10.050860 2314 server.go:1289] "Started kubelet"
May 27 03:16:10.053097 kubelet[2314]: I0527 03:16:10.052176 2314 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:16:10.053097 kubelet[2314]: I0527 03:16:10.052351 2314 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:16:10.053665 kubelet[2314]: I0527 03:16:10.053621 2314 server.go:317] "Adding debug handlers to kubelet server"
May 27 03:16:10.054426 kubelet[2314]: I0527 03:16:10.054335 2314 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:16:10.054903 kubelet[2314]: I0527 03:16:10.054868 2314 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:16:10.056370 kubelet[2314]: E0527 03:16:10.054154 2314 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184343ee0d17ce11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:16:10.050825745 +0000 UTC m=+1.904909407,LastTimestamp:2025-05-27 03:16:10.050825745 +0000 UTC m=+1.904909407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 27 03:16:10.056370 kubelet[2314]: I0527 03:16:10.055999 2314 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:16:10.060566 kubelet[2314]: E0527 03:16:10.058896 2314 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:16:10.060566 kubelet[2314]: I0527 03:16:10.058951 2314 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 03:16:10.060566 kubelet[2314]: I0527 03:16:10.059187 2314 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 03:16:10.060566 kubelet[2314]: I0527 03:16:10.059347 2314 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:16:10.060566 kubelet[2314]: E0527 03:16:10.059902 2314 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 27 03:16:10.060566 kubelet[2314]: I0527 03:16:10.060096 2314 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:16:10.060566 kubelet[2314]: E0527 03:16:10.060275 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms"
May 27 03:16:10.062112 kubelet[2314]: I0527 03:16:10.062076 2314 factory.go:223] Registration of the containerd container factory successfully
May 27 03:16:10.062112 kubelet[2314]: I0527 03:16:10.062100 2314 factory.go:223] Registration of the systemd container factory successfully
May 27 03:16:10.062511 kubelet[2314]: E0527 03:16:10.062481 2314 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 03:16:10.075903 kubelet[2314]: I0527 03:16:10.075865 2314 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 03:16:10.075903 kubelet[2314]: I0527 03:16:10.075889 2314 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 03:16:10.075903 kubelet[2314]: I0527 03:16:10.075906 2314 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:16:10.080950 kubelet[2314]: I0527 03:16:10.080910 2314 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 03:16:10.082575 kubelet[2314]: I0527 03:16:10.082532 2314 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 03:16:10.082575 kubelet[2314]: I0527 03:16:10.082577 2314 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 03:16:10.082653 kubelet[2314]: I0527 03:16:10.082599 2314 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 03:16:10.082653 kubelet[2314]: I0527 03:16:10.082612 2314 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 03:16:10.082705 kubelet[2314]: E0527 03:16:10.082657 2314 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:16:10.083291 kubelet[2314]: E0527 03:16:10.083255 2314 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 03:16:10.159178 kubelet[2314]: E0527 03:16:10.159136 2314 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:16:10.183799 kubelet[2314]: E0527 03:16:10.183652 2314 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 03:16:10.206748 kubelet[2314]: I0527 03:16:10.206710 2314 policy_none.go:49] "None policy: Start"
May 27 03:16:10.206817 kubelet[2314]: I0527 03:16:10.206753 2314 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 03:16:10.206817 kubelet[2314]: I0527 03:16:10.206777 2314 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:16:10.259991 kubelet[2314]: E0527 03:16:10.259928 2314 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:16:10.261587 kubelet[2314]: E0527 03:16:10.261523 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms"
May 27 03:16:10.360942 kubelet[2314]: E0527 03:16:10.360912 2314 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:16:10.384147 kubelet[2314]: E0527 03:16:10.384105 2314 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 03:16:10.443301 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 03:16:10.458211 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 03:16:10.461293 kubelet[2314]: E0527 03:16:10.461238 2314 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:16:10.462369 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 03:16:10.484908 kubelet[2314]: E0527 03:16:10.484858 2314 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 03:16:10.485217 kubelet[2314]: I0527 03:16:10.485195 2314 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:16:10.485258 kubelet[2314]: I0527 03:16:10.485217 2314 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:16:10.485495 kubelet[2314]: I0527 03:16:10.485418 2314 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:16:10.486992 kubelet[2314]: E0527 03:16:10.486964 2314 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 03:16:10.487085 kubelet[2314]: E0527 03:16:10.487039 2314 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 27 03:16:10.587371 kubelet[2314]: I0527 03:16:10.587307 2314 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 03:16:10.587876 kubelet[2314]: E0527 03:16:10.587821 2314 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
May 27 03:16:10.662784 kubelet[2314]: E0527 03:16:10.662719 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms"
May 27 03:16:10.789741 kubelet[2314]: I0527 03:16:10.789430 2314 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 03:16:10.790540 kubelet[2314]: E0527 03:16:10.790480 2314 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
May 27 03:16:10.797484 systemd[1]: Created slice kubepods-burstable-pod2e0a7c0ebf5e5258061c10bc4134c4cb.slice - libcontainer container kubepods-burstable-pod2e0a7c0ebf5e5258061c10bc4134c4cb.slice.
May 27 03:16:10.808694 kubelet[2314]: E0527 03:16:10.808635 2314 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:16:10.812265 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice.
May 27 03:16:10.823091 kubelet[2314]: E0527 03:16:10.823044 2314 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:16:10.826436 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice.
May 27 03:16:10.828634 kubelet[2314]: E0527 03:16:10.828597 2314 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:16:10.859381 kubelet[2314]: E0527 03:16:10.859319 2314 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 03:16:10.863884 kubelet[2314]: I0527 03:16:10.863783 2314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e0a7c0ebf5e5258061c10bc4134c4cb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e0a7c0ebf5e5258061c10bc4134c4cb\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:16:10.863884 kubelet[2314]: I0527 03:16:10.863843 2314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e0a7c0ebf5e5258061c10bc4134c4cb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e0a7c0ebf5e5258061c10bc4134c4cb\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:16:10.863884 kubelet[2314]: I0527 03:16:10.863869 2314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost"
May 27 03:16:10.864041 kubelet[2314]: I0527 03:16:10.863940 2314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:16:10.864041 kubelet[2314]: I0527 03:16:10.863980 2314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:16:10.864102 kubelet[2314]: I0527 03:16:10.864051 2314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:16:10.864102 kubelet[2314]: I0527 03:16:10.864073 2314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:16:10.864159 kubelet[2314]: I0527 03:16:10.864117 2314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:16:10.864159 kubelet[2314]: I0527 03:16:10.864153 2314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e0a7c0ebf5e5258061c10bc4134c4cb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e0a7c0ebf5e5258061c10bc4134c4cb\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:16:11.110632 containerd[1578]: time="2025-05-27T03:16:11.110455017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e0a7c0ebf5e5258061c10bc4134c4cb,Namespace:kube-system,Attempt:0,}"
May 27 03:16:11.124269 containerd[1578]: time="2025-05-27T03:16:11.124207993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}"
May 27 03:16:11.130262 containerd[1578]: time="2025-05-27T03:16:11.130210262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}"
May 27 03:16:11.146578 containerd[1578]: time="2025-05-27T03:16:11.146494346Z" level=info msg="connecting to shim 2f1d227ad6c63d8e5e436355be7c36b228f553c455c59ade19fcb5a84f8de6c9" address="unix:///run/containerd/s/f7ed6fedbde1603dcca108d9b532aa317a4d241c6fae039ae07e1dcb662ff924" namespace=k8s.io protocol=ttrpc version=3
May 27 03:16:11.169799 containerd[1578]: time="2025-05-27T03:16:11.169725521Z" level=info msg="connecting to shim e03081af58f933b2df51053425fcd9e8db8be5aa42e5a919a9c2059acd47e4ae" address="unix:///run/containerd/s/b906b613a514a4cad5967cb377f84d3a45272b564d8bed4405c37d4188daaaba" namespace=k8s.io protocol=ttrpc version=3
May 27 03:16:11.176927 containerd[1578]: time="2025-05-27T03:16:11.176839465Z" level=info msg="connecting to shim 85f3c0894b27137073d742015029df3762381e11449c7f89699b70fe3c9a859a" address="unix:///run/containerd/s/a2f6543acb8251d0bbd86e97e95e8be48f549f5ff44c25c1b30f96832fd1bed9" namespace=k8s.io protocol=ttrpc version=3
May 27 03:16:11.181825 systemd[1]: Started cri-containerd-2f1d227ad6c63d8e5e436355be7c36b228f553c455c59ade19fcb5a84f8de6c9.scope - libcontainer container 2f1d227ad6c63d8e5e436355be7c36b228f553c455c59ade19fcb5a84f8de6c9.
May 27 03:16:11.192742 kubelet[2314]: I0527 03:16:11.192686 2314 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 03:16:11.193477 kubelet[2314]: E0527 03:16:11.193433 2314 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
May 27 03:16:11.209701 systemd[1]: Started cri-containerd-e03081af58f933b2df51053425fcd9e8db8be5aa42e5a919a9c2059acd47e4ae.scope - libcontainer container e03081af58f933b2df51053425fcd9e8db8be5aa42e5a919a9c2059acd47e4ae.
May 27 03:16:11.215141 systemd[1]: Started cri-containerd-85f3c0894b27137073d742015029df3762381e11449c7f89699b70fe3c9a859a.scope - libcontainer container 85f3c0894b27137073d742015029df3762381e11449c7f89699b70fe3c9a859a.
May 27 03:16:11.261918 containerd[1578]: time="2025-05-27T03:16:11.261860310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e0a7c0ebf5e5258061c10bc4134c4cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f1d227ad6c63d8e5e436355be7c36b228f553c455c59ade19fcb5a84f8de6c9\"" May 27 03:16:11.268677 containerd[1578]: time="2025-05-27T03:16:11.268462463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"85f3c0894b27137073d742015029df3762381e11449c7f89699b70fe3c9a859a\"" May 27 03:16:11.271947 containerd[1578]: time="2025-05-27T03:16:11.271798841Z" level=info msg="CreateContainer within sandbox \"2f1d227ad6c63d8e5e436355be7c36b228f553c455c59ade19fcb5a84f8de6c9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:16:11.272248 containerd[1578]: time="2025-05-27T03:16:11.272214451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"e03081af58f933b2df51053425fcd9e8db8be5aa42e5a919a9c2059acd47e4ae\"" May 27 03:16:11.279202 containerd[1578]: time="2025-05-27T03:16:11.279162123Z" level=info msg="CreateContainer within sandbox \"85f3c0894b27137073d742015029df3762381e11449c7f89699b70fe3c9a859a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:16:11.290864 containerd[1578]: time="2025-05-27T03:16:11.290836661Z" level=info msg="CreateContainer within sandbox \"e03081af58f933b2df51053425fcd9e8db8be5aa42e5a919a9c2059acd47e4ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:16:11.292339 containerd[1578]: time="2025-05-27T03:16:11.292282703Z" level=info msg="Container 77f1983da22ec569904a451fad1ada91c57729d3a36df2e4ed0d994ba99a21c8: CDI devices from CRI Config.CDIDevices: []" May 27 
03:16:11.305830 containerd[1578]: time="2025-05-27T03:16:11.305789057Z" level=info msg="CreateContainer within sandbox \"2f1d227ad6c63d8e5e436355be7c36b228f553c455c59ade19fcb5a84f8de6c9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77f1983da22ec569904a451fad1ada91c57729d3a36df2e4ed0d994ba99a21c8\"" May 27 03:16:11.306373 containerd[1578]: time="2025-05-27T03:16:11.306341073Z" level=info msg="StartContainer for \"77f1983da22ec569904a451fad1ada91c57729d3a36df2e4ed0d994ba99a21c8\"" May 27 03:16:11.307475 containerd[1578]: time="2025-05-27T03:16:11.307438882Z" level=info msg="connecting to shim 77f1983da22ec569904a451fad1ada91c57729d3a36df2e4ed0d994ba99a21c8" address="unix:///run/containerd/s/f7ed6fedbde1603dcca108d9b532aa317a4d241c6fae039ae07e1dcb662ff924" protocol=ttrpc version=3 May 27 03:16:11.310326 containerd[1578]: time="2025-05-27T03:16:11.310275623Z" level=info msg="Container 49ead0e903de09b63163e2f71932767574171ff6f4686f79cee7b7033dc35e8d: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:11.313362 containerd[1578]: time="2025-05-27T03:16:11.313299304Z" level=info msg="Container 058297ba93d82dea250285d0b479981f7a2ed8f6ff4f351336ef10a2a477cbc6: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:11.319540 containerd[1578]: time="2025-05-27T03:16:11.319378928Z" level=info msg="CreateContainer within sandbox \"85f3c0894b27137073d742015029df3762381e11449c7f89699b70fe3c9a859a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"49ead0e903de09b63163e2f71932767574171ff6f4686f79cee7b7033dc35e8d\"" May 27 03:16:11.320086 containerd[1578]: time="2025-05-27T03:16:11.320051640Z" level=info msg="StartContainer for \"49ead0e903de09b63163e2f71932767574171ff6f4686f79cee7b7033dc35e8d\"" May 27 03:16:11.321440 containerd[1578]: time="2025-05-27T03:16:11.321417281Z" level=info msg="connecting to shim 49ead0e903de09b63163e2f71932767574171ff6f4686f79cee7b7033dc35e8d" 
address="unix:///run/containerd/s/a2f6543acb8251d0bbd86e97e95e8be48f549f5ff44c25c1b30f96832fd1bed9" protocol=ttrpc version=3 May 27 03:16:11.323565 containerd[1578]: time="2025-05-27T03:16:11.323469441Z" level=info msg="CreateContainer within sandbox \"e03081af58f933b2df51053425fcd9e8db8be5aa42e5a919a9c2059acd47e4ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"058297ba93d82dea250285d0b479981f7a2ed8f6ff4f351336ef10a2a477cbc6\"" May 27 03:16:11.323883 containerd[1578]: time="2025-05-27T03:16:11.323859051Z" level=info msg="StartContainer for \"058297ba93d82dea250285d0b479981f7a2ed8f6ff4f351336ef10a2a477cbc6\"" May 27 03:16:11.325325 containerd[1578]: time="2025-05-27T03:16:11.325303421Z" level=info msg="connecting to shim 058297ba93d82dea250285d0b479981f7a2ed8f6ff4f351336ef10a2a477cbc6" address="unix:///run/containerd/s/b906b613a514a4cad5967cb377f84d3a45272b564d8bed4405c37d4188daaaba" protocol=ttrpc version=3 May 27 03:16:11.330083 systemd[1]: Started cri-containerd-77f1983da22ec569904a451fad1ada91c57729d3a36df2e4ed0d994ba99a21c8.scope - libcontainer container 77f1983da22ec569904a451fad1ada91c57729d3a36df2e4ed0d994ba99a21c8. May 27 03:16:11.348763 systemd[1]: Started cri-containerd-49ead0e903de09b63163e2f71932767574171ff6f4686f79cee7b7033dc35e8d.scope - libcontainer container 49ead0e903de09b63163e2f71932767574171ff6f4686f79cee7b7033dc35e8d. May 27 03:16:11.354587 systemd[1]: Started cri-containerd-058297ba93d82dea250285d0b479981f7a2ed8f6ff4f351336ef10a2a477cbc6.scope - libcontainer container 058297ba93d82dea250285d0b479981f7a2ed8f6ff4f351336ef10a2a477cbc6. 
May 27 03:16:11.395472 containerd[1578]: time="2025-05-27T03:16:11.393956955Z" level=info msg="StartContainer for \"77f1983da22ec569904a451fad1ada91c57729d3a36df2e4ed0d994ba99a21c8\" returns successfully" May 27 03:16:11.410014 containerd[1578]: time="2025-05-27T03:16:11.409967286Z" level=info msg="StartContainer for \"49ead0e903de09b63163e2f71932767574171ff6f4686f79cee7b7033dc35e8d\" returns successfully" May 27 03:16:11.420710 containerd[1578]: time="2025-05-27T03:16:11.420667136Z" level=info msg="StartContainer for \"058297ba93d82dea250285d0b479981f7a2ed8f6ff4f351336ef10a2a477cbc6\" returns successfully" May 27 03:16:11.442157 kubelet[2314]: E0527 03:16:11.442110 2314 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 03:16:11.995820 kubelet[2314]: I0527 03:16:11.995724 2314 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:16:12.091174 kubelet[2314]: E0527 03:16:12.091131 2314 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:12.095037 kubelet[2314]: E0527 03:16:12.095008 2314 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:12.098381 kubelet[2314]: E0527 03:16:12.098353 2314 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:13.046165 kubelet[2314]: I0527 03:16:13.046113 2314 apiserver.go:52] "Watching apiserver" May 27 03:16:13.051466 kubelet[2314]: E0527 03:16:13.051422 2314 nodelease.go:49] "Failed to get node when 
trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 03:16:13.059711 kubelet[2314]: I0527 03:16:13.059678 2314 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:16:13.100693 kubelet[2314]: E0527 03:16:13.100641 2314 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:13.100911 kubelet[2314]: E0527 03:16:13.100873 2314 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:13.139168 kubelet[2314]: I0527 03:16:13.139064 2314 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:16:13.139168 kubelet[2314]: E0527 03:16:13.139111 2314 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 27 03:16:13.160247 kubelet[2314]: I0527 03:16:13.160183 2314 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:16:13.167961 kubelet[2314]: E0527 03:16:13.167906 2314 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 03:16:13.167961 kubelet[2314]: I0527 03:16:13.167939 2314 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:16:13.170014 kubelet[2314]: E0527 03:16:13.169969 2314 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 03:16:13.170014 kubelet[2314]: I0527 03:16:13.170006 2314 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:16:13.171873 kubelet[2314]: E0527 03:16:13.171844 2314 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 03:16:14.392477 kubelet[2314]: I0527 03:16:14.392428 2314 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:16:16.084622 systemd[1]: Reload requested from client PID 2600 ('systemctl') (unit session-7.scope)... May 27 03:16:16.084954 systemd[1]: Reloading... May 27 03:16:16.164592 zram_generator::config[2643]: No configuration found. May 27 03:16:16.233069 kubelet[2314]: I0527 03:16:16.233033 2314 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:16:16.262378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:16:16.264614 kubelet[2314]: I0527 03:16:16.264123 2314 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:16:16.396485 systemd[1]: Reloading finished in 311 ms. May 27 03:16:16.424795 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:16:16.447147 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:16:16.447461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:16:16.447529 systemd[1]: kubelet.service: Consumed 1.291s CPU time, 132.6M memory peak. May 27 03:16:16.449607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:16:16.672561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:16:16.687047 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:16:16.729677 kubelet[2688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:16:16.729677 kubelet[2688]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:16:16.729677 kubelet[2688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:16:16.730203 kubelet[2688]: I0527 03:16:16.729716 2688 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:16:16.738870 kubelet[2688]: I0527 03:16:16.738814 2688 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 03:16:16.738870 kubelet[2688]: I0527 03:16:16.738846 2688 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:16:16.739120 kubelet[2688]: I0527 03:16:16.739091 2688 server.go:956] "Client rotation is on, will bootstrap in background" May 27 03:16:16.740402 kubelet[2688]: I0527 03:16:16.740367 2688 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 03:16:16.742660 kubelet[2688]: I0527 03:16:16.742505 2688 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:16:16.746895 kubelet[2688]: I0527 03:16:16.746870 2688 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" May 27 03:16:16.751949 kubelet[2688]: I0527 03:16:16.751901 2688 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:16:16.752240 kubelet[2688]: I0527 03:16:16.752202 2688 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:16:16.752405 kubelet[2688]: I0527 03:16:16.752237 2688 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} May 27 03:16:16.752405 kubelet[2688]: I0527 03:16:16.752403 2688 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:16:16.752566 kubelet[2688]: I0527 03:16:16.752414 2688 container_manager_linux.go:303] "Creating device plugin manager" May 27 03:16:16.752566 kubelet[2688]: I0527 03:16:16.752480 2688 state_mem.go:36] "Initialized new in-memory state store" May 27 03:16:16.752988 kubelet[2688]: I0527 03:16:16.752963 2688 kubelet.go:480] "Attempting to sync node with API server" May 27 03:16:16.753029 kubelet[2688]: I0527 03:16:16.752991 2688 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:16:16.753029 kubelet[2688]: I0527 03:16:16.753025 2688 kubelet.go:386] "Adding apiserver pod source" May 27 03:16:16.753078 kubelet[2688]: I0527 03:16:16.753047 2688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:16:16.757574 kubelet[2688]: I0527 03:16:16.756244 2688 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:16:16.757574 kubelet[2688]: I0527 03:16:16.756757 2688 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 03:16:16.760677 kubelet[2688]: I0527 03:16:16.760649 2688 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:16:16.760800 kubelet[2688]: I0527 03:16:16.760715 2688 server.go:1289] "Started kubelet" May 27 03:16:16.763237 kubelet[2688]: I0527 03:16:16.763212 2688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:16:16.767495 kubelet[2688]: I0527 03:16:16.767434 2688 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:16:16.768424 kubelet[2688]: I0527 03:16:16.768395 2688 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:16:16.768531 kubelet[2688]: I0527 03:16:16.768513 2688 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:16:16.768704 kubelet[2688]: I0527 03:16:16.768683 2688 reconciler.go:26] "Reconciler: start to sync state" May 27 03:16:16.768945 kubelet[2688]: I0527 03:16:16.768885 2688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:16:16.769242 kubelet[2688]: I0527 03:16:16.769207 2688 server.go:317] "Adding debug handlers to kubelet server" May 27 03:16:16.770067 kubelet[2688]: I0527 03:16:16.769235 2688 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:16:16.770423 kubelet[2688]: I0527 03:16:16.770409 2688 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:16:16.773012 kubelet[2688]: I0527 03:16:16.772964 2688 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:16:16.774203 kubelet[2688]: E0527 03:16:16.774182 2688 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:16:16.774572 kubelet[2688]: I0527 03:16:16.774534 2688 factory.go:223] Registration of the containerd container factory successfully May 27 03:16:16.774641 kubelet[2688]: I0527 03:16:16.774631 2688 factory.go:223] Registration of the systemd container factory successfully May 27 03:16:16.779406 kubelet[2688]: I0527 03:16:16.779354 2688 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 03:16:16.781258 kubelet[2688]: I0527 03:16:16.781232 2688 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 03:16:16.781258 kubelet[2688]: I0527 03:16:16.781259 2688 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 03:16:16.781559 kubelet[2688]: I0527 03:16:16.781278 2688 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:16:16.781559 kubelet[2688]: I0527 03:16:16.781290 2688 kubelet.go:2436] "Starting kubelet main sync loop" May 27 03:16:16.781559 kubelet[2688]: E0527 03:16:16.781328 2688 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:16:16.817258 kubelet[2688]: I0527 03:16:16.817198 2688 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:16:16.817258 kubelet[2688]: I0527 03:16:16.817235 2688 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:16:16.817258 kubelet[2688]: I0527 03:16:16.817263 2688 state_mem.go:36] "Initialized new in-memory state store" May 27 03:16:16.817461 kubelet[2688]: I0527 03:16:16.817441 2688 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:16:16.817490 kubelet[2688]: I0527 03:16:16.817460 2688 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:16:16.817511 kubelet[2688]: I0527 03:16:16.817498 2688 policy_none.go:49] "None policy: Start" May 27 03:16:16.817559 kubelet[2688]: I0527 03:16:16.817519 2688 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:16:16.817559 kubelet[2688]: I0527 03:16:16.817536 2688 state_mem.go:35] "Initializing new in-memory state store" May 27 03:16:16.817711 kubelet[2688]: I0527 03:16:16.817683 2688 state_mem.go:75] "Updated machine memory state" May 27 03:16:16.823900 kubelet[2688]: E0527 03:16:16.823866 2688 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 03:16:16.824144 kubelet[2688]: I0527 
03:16:16.824120 2688 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:16:16.824185 kubelet[2688]: I0527 03:16:16.824148 2688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:16:16.824536 kubelet[2688]: I0527 03:16:16.824510 2688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:16:16.826964 kubelet[2688]: E0527 03:16:16.826795 2688 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:16:16.882693 kubelet[2688]: I0527 03:16:16.882627 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:16:16.882693 kubelet[2688]: I0527 03:16:16.882669 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:16:16.883043 kubelet[2688]: I0527 03:16:16.882848 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:16:16.888179 kubelet[2688]: E0527 03:16:16.888143 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:16:16.888619 kubelet[2688]: E0527 03:16:16.888574 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 27 03:16:16.888836 kubelet[2688]: E0527 03:16:16.888790 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:16:16.936273 kubelet[2688]: I0527 03:16:16.936127 2688 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:16:16.948627 kubelet[2688]: I0527 03:16:16.948537 2688 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" May 27 03:16:16.948815 kubelet[2688]: I0527 03:16:16.948678 2688 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:16:16.969946 kubelet[2688]: I0527 03:16:16.969871 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:16.969946 kubelet[2688]: I0527 03:16:16.969931 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:16.969946 kubelet[2688]: I0527 03:16:16.969957 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 03:16:16.970167 kubelet[2688]: I0527 03:16:16.969976 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e0a7c0ebf5e5258061c10bc4134c4cb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e0a7c0ebf5e5258061c10bc4134c4cb\") " pod="kube-system/kube-apiserver-localhost" May 27 03:16:16.970167 kubelet[2688]: I0527 03:16:16.970000 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2e0a7c0ebf5e5258061c10bc4134c4cb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e0a7c0ebf5e5258061c10bc4134c4cb\") " pod="kube-system/kube-apiserver-localhost" May 27 03:16:16.970167 kubelet[2688]: I0527 03:16:16.970067 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:16.970230 kubelet[2688]: I0527 03:16:16.970155 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:16.970259 kubelet[2688]: I0527 03:16:16.970197 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:16.970280 kubelet[2688]: I0527 03:16:16.970250 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e0a7c0ebf5e5258061c10bc4134c4cb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e0a7c0ebf5e5258061c10bc4134c4cb\") " pod="kube-system/kube-apiserver-localhost" May 27 03:16:17.754155 kubelet[2688]: I0527 03:16:17.754090 2688 apiserver.go:52] "Watching apiserver" May 27 03:16:17.768809 kubelet[2688]: I0527 03:16:17.768771 2688 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:16:17.796886 kubelet[2688]: I0527 03:16:17.796851 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:16:17.796965 kubelet[2688]: I0527 03:16:17.796920 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:16:17.796997 kubelet[2688]: I0527 03:16:17.796981 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:16:17.803160 kubelet[2688]: E0527 03:16:17.803128 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:16:17.803766 kubelet[2688]: E0527 03:16:17.803638 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:16:17.804011 kubelet[2688]: E0527 03:16:17.803986 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 27 03:16:17.813776 kubelet[2688]: I0527 03:16:17.813717 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8136746590000001 podStartE2EDuration="1.813674659s" podCreationTimestamp="2025-05-27 03:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:16:17.813457737 +0000 UTC m=+1.121263002" watchObservedRunningTime="2025-05-27 03:16:17.813674659 +0000 UTC m=+1.121479924" May 27 03:16:17.826170 kubelet[2688]: I0527 03:16:17.826091 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=3.826076814 podStartE2EDuration="3.826076814s" podCreationTimestamp="2025-05-27 03:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:16:17.819839192 +0000 UTC m=+1.127644467" watchObservedRunningTime="2025-05-27 03:16:17.826076814 +0000 UTC m=+1.133882069" May 27 03:16:17.836931 kubelet[2688]: I0527 03:16:17.836858 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8368362710000001 podStartE2EDuration="1.836836271s" podCreationTimestamp="2025-05-27 03:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:16:17.826261615 +0000 UTC m=+1.134066880" watchObservedRunningTime="2025-05-27 03:16:17.836836271 +0000 UTC m=+1.144641536" May 27 03:16:20.953763 kubelet[2688]: I0527 03:16:20.953720 2688 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:16:20.954160 containerd[1578]: time="2025-05-27T03:16:20.954064952Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 03:16:20.954411 kubelet[2688]: I0527 03:16:20.954231 2688 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:16:22.137947 systemd[1]: Created slice kubepods-besteffort-pod64600dfc_5e10_4f9c_bd31_c3b86b1018ec.slice - libcontainer container kubepods-besteffort-pod64600dfc_5e10_4f9c_bd31_c3b86b1018ec.slice. May 27 03:16:22.176588 systemd[1]: Created slice kubepods-besteffort-pod5b39212f_efe2_4840_add0_d7cdf380b8da.slice - libcontainer container kubepods-besteffort-pod5b39212f_efe2_4840_add0_d7cdf380b8da.slice. 
May 27 03:16:22.203508 kubelet[2688]: I0527 03:16:22.203420 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64600dfc-5e10-4f9c-bd31-c3b86b1018ec-xtables-lock\") pod \"kube-proxy-fhjqg\" (UID: \"64600dfc-5e10-4f9c-bd31-c3b86b1018ec\") " pod="kube-system/kube-proxy-fhjqg" May 27 03:16:22.203508 kubelet[2688]: I0527 03:16:22.203479 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzpxc\" (UniqueName: \"kubernetes.io/projected/5b39212f-efe2-4840-add0-d7cdf380b8da-kube-api-access-nzpxc\") pod \"tigera-operator-844669ff44-rm7c4\" (UID: \"5b39212f-efe2-4840-add0-d7cdf380b8da\") " pod="tigera-operator/tigera-operator-844669ff44-rm7c4" May 27 03:16:22.203508 kubelet[2688]: I0527 03:16:22.203496 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64600dfc-5e10-4f9c-bd31-c3b86b1018ec-lib-modules\") pod \"kube-proxy-fhjqg\" (UID: \"64600dfc-5e10-4f9c-bd31-c3b86b1018ec\") " pod="kube-system/kube-proxy-fhjqg" May 27 03:16:22.204121 kubelet[2688]: I0527 03:16:22.203620 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2gb\" (UniqueName: \"kubernetes.io/projected/64600dfc-5e10-4f9c-bd31-c3b86b1018ec-kube-api-access-gt2gb\") pod \"kube-proxy-fhjqg\" (UID: \"64600dfc-5e10-4f9c-bd31-c3b86b1018ec\") " pod="kube-system/kube-proxy-fhjqg" May 27 03:16:22.204121 kubelet[2688]: I0527 03:16:22.203681 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/64600dfc-5e10-4f9c-bd31-c3b86b1018ec-kube-proxy\") pod \"kube-proxy-fhjqg\" (UID: \"64600dfc-5e10-4f9c-bd31-c3b86b1018ec\") " pod="kube-system/kube-proxy-fhjqg" May 27 03:16:22.204121 kubelet[2688]: 
I0527 03:16:22.203713 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5b39212f-efe2-4840-add0-d7cdf380b8da-var-lib-calico\") pod \"tigera-operator-844669ff44-rm7c4\" (UID: \"5b39212f-efe2-4840-add0-d7cdf380b8da\") " pod="tigera-operator/tigera-operator-844669ff44-rm7c4" May 27 03:16:22.451349 containerd[1578]: time="2025-05-27T03:16:22.451200756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhjqg,Uid:64600dfc-5e10-4f9c-bd31-c3b86b1018ec,Namespace:kube-system,Attempt:0,}" May 27 03:16:22.472159 containerd[1578]: time="2025-05-27T03:16:22.472109130Z" level=info msg="connecting to shim c55949f45f1b3c3a358a5a27504f7bc929bb8521422d737cf0b5ca3c615d1f85" address="unix:///run/containerd/s/a489725954faf19579ee51df6da1bc68cc0a5caad16ca12a129128b85ad9a57f" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:22.481911 containerd[1578]: time="2025-05-27T03:16:22.481869678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-rm7c4,Uid:5b39212f-efe2-4840-add0-d7cdf380b8da,Namespace:tigera-operator,Attempt:0,}" May 27 03:16:22.499718 systemd[1]: Started cri-containerd-c55949f45f1b3c3a358a5a27504f7bc929bb8521422d737cf0b5ca3c615d1f85.scope - libcontainer container c55949f45f1b3c3a358a5a27504f7bc929bb8521422d737cf0b5ca3c615d1f85. 
May 27 03:16:22.506838 containerd[1578]: time="2025-05-27T03:16:22.506775385Z" level=info msg="connecting to shim d0c0c0896b53aa2e36d7158e015595428ce53312d75d52b6151a23a7a56b0508" address="unix:///run/containerd/s/03ec380759d93fafd65169ee5942670041e9635814ba5850813b9fe4ebab6fc7" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:22.535339 containerd[1578]: time="2025-05-27T03:16:22.535283308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhjqg,Uid:64600dfc-5e10-4f9c-bd31-c3b86b1018ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"c55949f45f1b3c3a358a5a27504f7bc929bb8521422d737cf0b5ca3c615d1f85\"" May 27 03:16:22.537866 systemd[1]: Started cri-containerd-d0c0c0896b53aa2e36d7158e015595428ce53312d75d52b6151a23a7a56b0508.scope - libcontainer container d0c0c0896b53aa2e36d7158e015595428ce53312d75d52b6151a23a7a56b0508. May 27 03:16:22.542599 containerd[1578]: time="2025-05-27T03:16:22.541699419Z" level=info msg="CreateContainer within sandbox \"c55949f45f1b3c3a358a5a27504f7bc929bb8521422d737cf0b5ca3c615d1f85\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:16:22.553482 containerd[1578]: time="2025-05-27T03:16:22.553439984Z" level=info msg="Container 4b19fdacfdf5149c57c3c4819a51baf0ef9d1fa039953dd679b00d741411a852: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:22.562901 containerd[1578]: time="2025-05-27T03:16:22.562852975Z" level=info msg="CreateContainer within sandbox \"c55949f45f1b3c3a358a5a27504f7bc929bb8521422d737cf0b5ca3c615d1f85\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b19fdacfdf5149c57c3c4819a51baf0ef9d1fa039953dd679b00d741411a852\"" May 27 03:16:22.563799 containerd[1578]: time="2025-05-27T03:16:22.563765161Z" level=info msg="StartContainer for \"4b19fdacfdf5149c57c3c4819a51baf0ef9d1fa039953dd679b00d741411a852\"" May 27 03:16:22.566363 containerd[1578]: time="2025-05-27T03:16:22.566329152Z" level=info msg="connecting to shim 
4b19fdacfdf5149c57c3c4819a51baf0ef9d1fa039953dd679b00d741411a852" address="unix:///run/containerd/s/a489725954faf19579ee51df6da1bc68cc0a5caad16ca12a129128b85ad9a57f" protocol=ttrpc version=3 May 27 03:16:22.588690 systemd[1]: Started cri-containerd-4b19fdacfdf5149c57c3c4819a51baf0ef9d1fa039953dd679b00d741411a852.scope - libcontainer container 4b19fdacfdf5149c57c3c4819a51baf0ef9d1fa039953dd679b00d741411a852. May 27 03:16:22.594945 containerd[1578]: time="2025-05-27T03:16:22.594881169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-rm7c4,Uid:5b39212f-efe2-4840-add0-d7cdf380b8da,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d0c0c0896b53aa2e36d7158e015595428ce53312d75d52b6151a23a7a56b0508\"" May 27 03:16:22.596969 containerd[1578]: time="2025-05-27T03:16:22.596930938Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 03:16:22.636283 containerd[1578]: time="2025-05-27T03:16:22.636232354Z" level=info msg="StartContainer for \"4b19fdacfdf5149c57c3c4819a51baf0ef9d1fa039953dd679b00d741411a852\" returns successfully" May 27 03:16:22.821259 kubelet[2688]: I0527 03:16:22.821169 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fhjqg" podStartSLOduration=0.821147481 podStartE2EDuration="821.147481ms" podCreationTimestamp="2025-05-27 03:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:16:22.820714722 +0000 UTC m=+6.128519997" watchObservedRunningTime="2025-05-27 03:16:22.821147481 +0000 UTC m=+6.128952746" May 27 03:16:25.142685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659107456.mount: Deactivated successfully. 
May 27 03:16:25.500855 containerd[1578]: time="2025-05-27T03:16:25.500799614Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:25.501512 containerd[1578]: time="2025-05-27T03:16:25.501474689Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 03:16:25.502508 containerd[1578]: time="2025-05-27T03:16:25.502469719Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:25.504419 containerd[1578]: time="2025-05-27T03:16:25.504378215Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:25.504982 containerd[1578]: time="2025-05-27T03:16:25.504940817Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.907981146s" May 27 03:16:25.504982 containerd[1578]: time="2025-05-27T03:16:25.504971225Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 03:16:25.509511 containerd[1578]: time="2025-05-27T03:16:25.509465546Z" level=info msg="CreateContainer within sandbox \"d0c0c0896b53aa2e36d7158e015595428ce53312d75d52b6151a23a7a56b0508\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 03:16:25.515608 containerd[1578]: time="2025-05-27T03:16:25.515575920Z" level=info msg="Container 
17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:25.521754 containerd[1578]: time="2025-05-27T03:16:25.521714539Z" level=info msg="CreateContainer within sandbox \"d0c0c0896b53aa2e36d7158e015595428ce53312d75d52b6151a23a7a56b0508\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f\"" May 27 03:16:25.522139 containerd[1578]: time="2025-05-27T03:16:25.522112400Z" level=info msg="StartContainer for \"17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f\"" May 27 03:16:25.523021 containerd[1578]: time="2025-05-27T03:16:25.522994997Z" level=info msg="connecting to shim 17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f" address="unix:///run/containerd/s/03ec380759d93fafd65169ee5942670041e9635814ba5850813b9fe4ebab6fc7" protocol=ttrpc version=3 May 27 03:16:25.583802 systemd[1]: Started cri-containerd-17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f.scope - libcontainer container 17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f. 
May 27 03:16:25.613456 containerd[1578]: time="2025-05-27T03:16:25.613413535Z" level=info msg="StartContainer for \"17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f\" returns successfully" May 27 03:16:25.825788 kubelet[2688]: I0527 03:16:25.825533 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-rm7c4" podStartSLOduration=0.916009331 podStartE2EDuration="3.825480274s" podCreationTimestamp="2025-05-27 03:16:22 +0000 UTC" firstStartedPulling="2025-05-27 03:16:22.596158466 +0000 UTC m=+5.903963731" lastFinishedPulling="2025-05-27 03:16:25.505629409 +0000 UTC m=+8.813434674" observedRunningTime="2025-05-27 03:16:25.825116947 +0000 UTC m=+9.132922212" watchObservedRunningTime="2025-05-27 03:16:25.825480274 +0000 UTC m=+9.133285559" May 27 03:16:26.168826 update_engine[1568]: I20250527 03:16:26.168649 1568 update_attempter.cc:509] Updating boot flags... May 27 03:16:27.601154 systemd[1]: cri-containerd-17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f.scope: Deactivated successfully. May 27 03:16:27.601963 containerd[1578]: time="2025-05-27T03:16:27.601299854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f\" id:\"17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f\" pid:3020 exit_status:1 exited_at:{seconds:1748315787 nanos:600753091}" May 27 03:16:27.601963 containerd[1578]: time="2025-05-27T03:16:27.601402457Z" level=info msg="received exit event container_id:\"17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f\" id:\"17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f\" pid:3020 exit_status:1 exited_at:{seconds:1748315787 nanos:600753091}" May 27 03:16:27.632350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f-rootfs.mount: Deactivated successfully. 
May 27 03:16:28.826331 kubelet[2688]: I0527 03:16:28.826284 2688 scope.go:117] "RemoveContainer" containerID="17929c3f7716eb196ca4e3ec6b1af47b7e05de107ce562744d4a33099abdd52f" May 27 03:16:28.828196 containerd[1578]: time="2025-05-27T03:16:28.828126112Z" level=info msg="CreateContainer within sandbox \"d0c0c0896b53aa2e36d7158e015595428ce53312d75d52b6151a23a7a56b0508\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 27 03:16:29.215039 containerd[1578]: time="2025-05-27T03:16:29.214978327Z" level=info msg="Container a7e9bb7e920f9cb50fcad4a51033706927481b184ff52c3639cc33f2d9f3aed6: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:29.382705 containerd[1578]: time="2025-05-27T03:16:29.382648718Z" level=info msg="CreateContainer within sandbox \"d0c0c0896b53aa2e36d7158e015595428ce53312d75d52b6151a23a7a56b0508\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a7e9bb7e920f9cb50fcad4a51033706927481b184ff52c3639cc33f2d9f3aed6\"" May 27 03:16:29.383235 containerd[1578]: time="2025-05-27T03:16:29.383188847Z" level=info msg="StartContainer for \"a7e9bb7e920f9cb50fcad4a51033706927481b184ff52c3639cc33f2d9f3aed6\"" May 27 03:16:29.384190 containerd[1578]: time="2025-05-27T03:16:29.384157124Z" level=info msg="connecting to shim a7e9bb7e920f9cb50fcad4a51033706927481b184ff52c3639cc33f2d9f3aed6" address="unix:///run/containerd/s/03ec380759d93fafd65169ee5942670041e9635814ba5850813b9fe4ebab6fc7" protocol=ttrpc version=3 May 27 03:16:29.408666 systemd[1]: Started cri-containerd-a7e9bb7e920f9cb50fcad4a51033706927481b184ff52c3639cc33f2d9f3aed6.scope - libcontainer container a7e9bb7e920f9cb50fcad4a51033706927481b184ff52c3639cc33f2d9f3aed6. 
May 27 03:16:29.501250 containerd[1578]: time="2025-05-27T03:16:29.501118097Z" level=info msg="StartContainer for \"a7e9bb7e920f9cb50fcad4a51033706927481b184ff52c3639cc33f2d9f3aed6\" returns successfully" May 27 03:16:30.789869 sudo[1793]: pam_unix(sudo:session): session closed for user root May 27 03:16:30.791433 sshd[1792]: Connection closed by 10.0.0.1 port 60480 May 27 03:16:30.791966 sshd-session[1790]: pam_unix(sshd:session): session closed for user core May 27 03:16:30.796757 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:60480.service: Deactivated successfully. May 27 03:16:30.799120 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:16:30.799340 systemd[1]: session-7.scope: Consumed 6.415s CPU time, 223.1M memory peak. May 27 03:16:30.801253 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. May 27 03:16:30.802938 systemd-logind[1565]: Removed session 7. May 27 03:16:36.380190 systemd[1]: Created slice kubepods-besteffort-podaedc04ca_7099_43ca_a2cf_5885835c9c78.slice - libcontainer container kubepods-besteffort-podaedc04ca_7099_43ca_a2cf_5885835c9c78.slice. 
May 27 03:16:36.391985 kubelet[2688]: I0527 03:16:36.391869 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aedc04ca-7099-43ca-a2cf-5885835c9c78-tigera-ca-bundle\") pod \"calico-typha-5796bfdc9f-zqx9q\" (UID: \"aedc04ca-7099-43ca-a2cf-5885835c9c78\") " pod="calico-system/calico-typha-5796bfdc9f-zqx9q" May 27 03:16:36.391985 kubelet[2688]: I0527 03:16:36.391911 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aedc04ca-7099-43ca-a2cf-5885835c9c78-typha-certs\") pod \"calico-typha-5796bfdc9f-zqx9q\" (UID: \"aedc04ca-7099-43ca-a2cf-5885835c9c78\") " pod="calico-system/calico-typha-5796bfdc9f-zqx9q" May 27 03:16:36.391985 kubelet[2688]: I0527 03:16:36.391929 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzdr5\" (UniqueName: \"kubernetes.io/projected/aedc04ca-7099-43ca-a2cf-5885835c9c78-kube-api-access-tzdr5\") pod \"calico-typha-5796bfdc9f-zqx9q\" (UID: \"aedc04ca-7099-43ca-a2cf-5885835c9c78\") " pod="calico-system/calico-typha-5796bfdc9f-zqx9q" May 27 03:16:36.444714 systemd[1]: Created slice kubepods-besteffort-pode2dcf363_46c5_4268_bce2_96ef6752cc79.slice - libcontainer container kubepods-besteffort-pode2dcf363_46c5_4268_bce2_96ef6752cc79.slice. 
May 27 03:16:36.492968 kubelet[2688]: I0527 03:16:36.492868 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2dcf363-46c5-4268-bce2-96ef6752cc79-lib-modules\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.492968 kubelet[2688]: I0527 03:16:36.492911 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2dcf363-46c5-4268-bce2-96ef6752cc79-var-run-calico\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.492968 kubelet[2688]: I0527 03:16:36.492931 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2dcf363-46c5-4268-bce2-96ef6752cc79-cni-bin-dir\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.492968 kubelet[2688]: I0527 03:16:36.492951 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2dcf363-46c5-4268-bce2-96ef6752cc79-policysync\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.492968 kubelet[2688]: I0527 03:16:36.492968 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2dcf363-46c5-4268-bce2-96ef6752cc79-xtables-lock\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.493249 kubelet[2688]: I0527 03:16:36.492984 2688 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwb4v\" (UniqueName: \"kubernetes.io/projected/e2dcf363-46c5-4268-bce2-96ef6752cc79-kube-api-access-bwb4v\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.493249 kubelet[2688]: I0527 03:16:36.492999 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e2dcf363-46c5-4268-bce2-96ef6752cc79-node-certs\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.493249 kubelet[2688]: I0527 03:16:36.493016 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2dcf363-46c5-4268-bce2-96ef6752cc79-tigera-ca-bundle\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.493249 kubelet[2688]: I0527 03:16:36.493041 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2dcf363-46c5-4268-bce2-96ef6752cc79-flexvol-driver-host\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.493374 kubelet[2688]: I0527 03:16:36.493337 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2dcf363-46c5-4268-bce2-96ef6752cc79-var-lib-calico\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.493539 kubelet[2688]: I0527 03:16:36.493388 2688 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2dcf363-46c5-4268-bce2-96ef6752cc79-cni-log-dir\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.493539 kubelet[2688]: I0527 03:16:36.493403 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2dcf363-46c5-4268-bce2-96ef6752cc79-cni-net-dir\") pod \"calico-node-qpfhl\" (UID: \"e2dcf363-46c5-4268-bce2-96ef6752cc79\") " pod="calico-system/calico-node-qpfhl" May 27 03:16:36.547102 kubelet[2688]: E0527 03:16:36.547042 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ccgt" podUID="4f76a0d4-a083-4947-aa3e-c00e6bb39edf" May 27 03:16:36.594175 kubelet[2688]: I0527 03:16:36.594125 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz49h\" (UniqueName: \"kubernetes.io/projected/4f76a0d4-a083-4947-aa3e-c00e6bb39edf-kube-api-access-xz49h\") pod \"csi-node-driver-9ccgt\" (UID: \"4f76a0d4-a083-4947-aa3e-c00e6bb39edf\") " pod="calico-system/csi-node-driver-9ccgt" May 27 03:16:36.594175 kubelet[2688]: I0527 03:16:36.594181 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f76a0d4-a083-4947-aa3e-c00e6bb39edf-socket-dir\") pod \"csi-node-driver-9ccgt\" (UID: \"4f76a0d4-a083-4947-aa3e-c00e6bb39edf\") " pod="calico-system/csi-node-driver-9ccgt" May 27 03:16:36.594385 kubelet[2688]: I0527 03:16:36.594221 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f76a0d4-a083-4947-aa3e-c00e6bb39edf-kubelet-dir\") pod \"csi-node-driver-9ccgt\" (UID: \"4f76a0d4-a083-4947-aa3e-c00e6bb39edf\") " pod="calico-system/csi-node-driver-9ccgt" May 27 03:16:36.594385 kubelet[2688]: I0527 03:16:36.594291 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f76a0d4-a083-4947-aa3e-c00e6bb39edf-registration-dir\") pod \"csi-node-driver-9ccgt\" (UID: \"4f76a0d4-a083-4947-aa3e-c00e6bb39edf\") " pod="calico-system/csi-node-driver-9ccgt" May 27 03:16:36.594385 kubelet[2688]: I0527 03:16:36.594336 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4f76a0d4-a083-4947-aa3e-c00e6bb39edf-varrun\") pod \"csi-node-driver-9ccgt\" (UID: \"4f76a0d4-a083-4947-aa3e-c00e6bb39edf\") " pod="calico-system/csi-node-driver-9ccgt" May 27 03:16:36.600492 kubelet[2688]: E0527 03:16:36.600444 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:36.603577 kubelet[2688]: W0527 03:16:36.603309 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:36.604252 kubelet[2688]: E0527 03:16:36.604221 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:16:36.604498 kubelet[2688]: E0527 03:16:36.604481 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:36.604533 kubelet[2688]: W0527 03:16:36.604495 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:36.604533 kubelet[2688]: E0527 03:16:36.604514 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:36.604784 kubelet[2688]: E0527 03:16:36.604767 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:36.604784 kubelet[2688]: W0527 03:16:36.604780 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:36.604845 kubelet[2688]: E0527 03:16:36.604790 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:16:36.687280 containerd[1578]: time="2025-05-27T03:16:36.687169893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5796bfdc9f-zqx9q,Uid:aedc04ca-7099-43ca-a2cf-5885835c9c78,Namespace:calico-system,Attempt:0,}" May 27 03:16:36.695717 kubelet[2688]: E0527 03:16:36.695680 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:36.695717 kubelet[2688]: W0527 03:16:36.695704 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:36.695837 kubelet[2688]: E0527 03:16:36.695740 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:36.695987 kubelet[2688]: E0527 03:16:36.695971 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:36.695987 kubelet[2688]: W0527 03:16:36.695985 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:36.696112 kubelet[2688]: E0527 03:16:36.695996 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:16:36.696237 kubelet[2688]: E0527 03:16:36.696223 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:36.696237 kubelet[2688]: W0527 03:16:36.696235 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:36.696289 kubelet[2688]: E0527 03:16:36.696245 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:36.696447 kubelet[2688]: E0527 03:16:36.696434 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:36.696447 kubelet[2688]: W0527 03:16:36.696444 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:36.696509 kubelet[2688]: E0527 03:16:36.696452 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:16:36.696692 kubelet[2688]: E0527 03:16:36.696670 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:36.696692 kubelet[2688]: W0527 03:16:36.696683 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:36.696751 kubelet[2688]: E0527 03:16:36.696692 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:36.696934 kubelet[2688]: E0527 03:16:36.696912 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:36.696934 kubelet[2688]: W0527 03:16:36.696928 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:36.696985 kubelet[2688]: E0527 03:16:36.696938 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the preceding three kubelet FlexVolume messages (driver-call.go:262, driver-call.go:149, plugins.go:703) repeat identically through May 27 03:16:36.708; repeats omitted]
Error: unexpected end of JSON input" May 27 03:16:36.749087 containerd[1578]: time="2025-05-27T03:16:36.748584119Z" level=info msg="connecting to shim 8abde44b47a3ff30ef6979ac57c63297646f041afbb87a7d27ff0b5793e0d348" address="unix:///run/containerd/s/fc75794c359490b7e7dfba7a504f9f7820515003c7ca013ba9d5ea69aa8c0a94" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:36.749406 containerd[1578]: time="2025-05-27T03:16:36.749372223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qpfhl,Uid:e2dcf363-46c5-4268-bce2-96ef6752cc79,Namespace:calico-system,Attempt:0,}" May 27 03:16:36.779760 systemd[1]: Started cri-containerd-8abde44b47a3ff30ef6979ac57c63297646f041afbb87a7d27ff0b5793e0d348.scope - libcontainer container 8abde44b47a3ff30ef6979ac57c63297646f041afbb87a7d27ff0b5793e0d348. May 27 03:16:36.780048 containerd[1578]: time="2025-05-27T03:16:36.779993825Z" level=info msg="connecting to shim 3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc" address="unix:///run/containerd/s/6c4165184de43606955b3803ba03c9d4998186f0914312f8ccb9dc160b587d60" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:36.812059 systemd[1]: Started cri-containerd-3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc.scope - libcontainer container 3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc. 
May 27 03:16:36.832609 containerd[1578]: time="2025-05-27T03:16:36.832566868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5796bfdc9f-zqx9q,Uid:aedc04ca-7099-43ca-a2cf-5885835c9c78,Namespace:calico-system,Attempt:0,} returns sandbox id \"8abde44b47a3ff30ef6979ac57c63297646f041afbb87a7d27ff0b5793e0d348\"" May 27 03:16:36.834005 containerd[1578]: time="2025-05-27T03:16:36.833968617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:16:36.853165 containerd[1578]: time="2025-05-27T03:16:36.853110289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qpfhl,Uid:e2dcf363-46c5-4268-bce2-96ef6752cc79,Namespace:calico-system,Attempt:0,} returns sandbox id \"3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc\"" May 27 03:16:38.782805 kubelet[2688]: E0527 03:16:38.782714 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ccgt" podUID="4f76a0d4-a083-4947-aa3e-c00e6bb39edf" May 27 03:16:39.328632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377299639.mount: Deactivated successfully. 
May 27 03:16:40.033558 containerd[1578]: time="2025-05-27T03:16:40.033466553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:40.034250 containerd[1578]: time="2025-05-27T03:16:40.034176088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 03:16:40.035234 containerd[1578]: time="2025-05-27T03:16:40.035185376Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:40.037091 containerd[1578]: time="2025-05-27T03:16:40.037059301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:40.037672 containerd[1578]: time="2025-05-27T03:16:40.037641536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 3.203618086s" May 27 03:16:40.037729 containerd[1578]: time="2025-05-27T03:16:40.037674949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:16:40.038903 containerd[1578]: time="2025-05-27T03:16:40.038854217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:16:40.050371 containerd[1578]: time="2025-05-27T03:16:40.050320964Z" level=info msg="CreateContainer within sandbox \"8abde44b47a3ff30ef6979ac57c63297646f041afbb87a7d27ff0b5793e0d348\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:16:40.056727 containerd[1578]: time="2025-05-27T03:16:40.056681578Z" level=info msg="Container e7a2cc00bae75ad0a0068665af2e70d541c94bc123b3ed2a4f642f96f636d37d: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:40.065681 containerd[1578]: time="2025-05-27T03:16:40.065633785Z" level=info msg="CreateContainer within sandbox \"8abde44b47a3ff30ef6979ac57c63297646f041afbb87a7d27ff0b5793e0d348\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e7a2cc00bae75ad0a0068665af2e70d541c94bc123b3ed2a4f642f96f636d37d\"" May 27 03:16:40.066800 containerd[1578]: time="2025-05-27T03:16:40.066759383Z" level=info msg="StartContainer for \"e7a2cc00bae75ad0a0068665af2e70d541c94bc123b3ed2a4f642f96f636d37d\"" May 27 03:16:40.069571 containerd[1578]: time="2025-05-27T03:16:40.068478556Z" level=info msg="connecting to shim e7a2cc00bae75ad0a0068665af2e70d541c94bc123b3ed2a4f642f96f636d37d" address="unix:///run/containerd/s/fc75794c359490b7e7dfba7a504f9f7820515003c7ca013ba9d5ea69aa8c0a94" protocol=ttrpc version=3 May 27 03:16:40.125807 systemd[1]: Started cri-containerd-e7a2cc00bae75ad0a0068665af2e70d541c94bc123b3ed2a4f642f96f636d37d.scope - libcontainer container e7a2cc00bae75ad0a0068665af2e70d541c94bc123b3ed2a4f642f96f636d37d. 
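The typha pull recorded above logs both the bytes transferred ("bytes read=35158669") and the elapsed time ("in 3.203618086s"). As a quick sanity check on the node's registry bandwidth, the effective throughput can be computed from those two figures (a sketch using only the values from the log):

```python
# Estimate image-pull throughput from the values containerd logged above.
bytes_read = 35158669          # "bytes read=35158669"
elapsed_s = 3.203618086        # "in 3.203618086s"

mib_per_s = bytes_read / elapsed_s / (1024 * 1024)
print(f"approx. pull throughput: {mib_per_s:.2f} MiB/s")
```

This works out to roughly 10–11 MiB/s, which is useful context when deciding whether a slow pod start is registry-bound.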
May 27 03:16:40.176450 containerd[1578]: time="2025-05-27T03:16:40.176375176Z" level=info msg="StartContainer for \"e7a2cc00bae75ad0a0068665af2e70d541c94bc123b3ed2a4f642f96f636d37d\" returns successfully" May 27 03:16:40.782803 kubelet[2688]: E0527 03:16:40.782723 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ccgt" podUID="4f76a0d4-a083-4947-aa3e-c00e6bb39edf" May 27 03:16:40.910019 kubelet[2688]: E0527 03:16:40.909972 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.910019 kubelet[2688]: W0527 03:16:40.910005 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.910232 kubelet[2688]: E0527 03:16:40.910035 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:40.910232 kubelet[2688]: E0527 03:16:40.910224 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.910280 kubelet[2688]: W0527 03:16:40.910236 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.910280 kubelet[2688]: E0527 03:16:40.910247 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the same three kubelet FlexVolume messages repeat identically through May 27 03:16:40.930; repeats omitted]
Error: unexpected end of JSON input" May 27 03:16:40.931109 kubelet[2688]: E0527 03:16:40.931088 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.931109 kubelet[2688]: W0527 03:16:40.931098 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.931109 kubelet[2688]: E0527 03:16:40.931106 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:40.931308 kubelet[2688]: E0527 03:16:40.931289 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.931308 kubelet[2688]: W0527 03:16:40.931298 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.931308 kubelet[2688]: E0527 03:16:40.931306 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:16:40.931615 kubelet[2688]: E0527 03:16:40.931582 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.931615 kubelet[2688]: W0527 03:16:40.931606 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.931663 kubelet[2688]: E0527 03:16:40.931617 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:40.931846 kubelet[2688]: E0527 03:16:40.931825 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.931846 kubelet[2688]: W0527 03:16:40.931837 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.931898 kubelet[2688]: E0527 03:16:40.931847 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:16:40.932039 kubelet[2688]: E0527 03:16:40.932018 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.932039 kubelet[2688]: W0527 03:16:40.932030 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.932080 kubelet[2688]: E0527 03:16:40.932038 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:40.932234 kubelet[2688]: E0527 03:16:40.932214 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.932234 kubelet[2688]: W0527 03:16:40.932225 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.932290 kubelet[2688]: E0527 03:16:40.932234 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:16:40.932437 kubelet[2688]: E0527 03:16:40.932416 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.932437 kubelet[2688]: W0527 03:16:40.932428 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.932488 kubelet[2688]: E0527 03:16:40.932438 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:40.932713 kubelet[2688]: E0527 03:16:40.932696 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.932713 kubelet[2688]: W0527 03:16:40.932707 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.932776 kubelet[2688]: E0527 03:16:40.932715 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:16:40.932882 kubelet[2688]: E0527 03:16:40.932868 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:16:40.932882 kubelet[2688]: W0527 03:16:40.932877 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:16:40.932926 kubelet[2688]: E0527 03:16:40.932884 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:16:41.276331 containerd[1578]: time="2025-05-27T03:16:41.276256236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:41.276965 containerd[1578]: time="2025-05-27T03:16:41.276919033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 03:16:41.278151 containerd[1578]: time="2025-05-27T03:16:41.278116905Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:41.280189 containerd[1578]: time="2025-05-27T03:16:41.280154267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:41.280902 containerd[1578]: time="2025-05-27T03:16:41.280870473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.241984277s" May 27 03:16:41.280938 containerd[1578]: time="2025-05-27T03:16:41.280901331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:16:41.285323 containerd[1578]: time="2025-05-27T03:16:41.285284825Z" level=info msg="CreateContainer within sandbox \"3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:16:41.294785 containerd[1578]: time="2025-05-27T03:16:41.294723655Z" level=info msg="Container 7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:41.303479 containerd[1578]: time="2025-05-27T03:16:41.303432293Z" level=info msg="CreateContainer within sandbox \"3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24\"" May 27 03:16:41.303992 containerd[1578]: time="2025-05-27T03:16:41.303956879Z" level=info msg="StartContainer for \"7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24\"" May 27 03:16:41.305379 containerd[1578]: time="2025-05-27T03:16:41.305312338Z" level=info msg="connecting to shim 7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24" address="unix:///run/containerd/s/6c4165184de43606955b3803ba03c9d4998186f0914312f8ccb9dc160b587d60" protocol=ttrpc version=3 May 27 03:16:41.331686 systemd[1]: Started cri-containerd-7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24.scope - libcontainer container 
7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24. May 27 03:16:41.377801 containerd[1578]: time="2025-05-27T03:16:41.377744480Z" level=info msg="StartContainer for \"7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24\" returns successfully" May 27 03:16:41.388254 systemd[1]: cri-containerd-7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24.scope: Deactivated successfully. May 27 03:16:41.390273 containerd[1578]: time="2025-05-27T03:16:41.390198571Z" level=info msg="received exit event container_id:\"7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24\" id:\"7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24\" pid:3403 exited_at:{seconds:1748315801 nanos:389435606}" May 27 03:16:41.390461 containerd[1578]: time="2025-05-27T03:16:41.390377627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24\" id:\"7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24\" pid:3403 exited_at:{seconds:1748315801 nanos:389435606}" May 27 03:16:41.418288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a4d11fe029c91740eb45216d4288ef16bf40f6f9a51d2e2e0e00b9853367a24-rootfs.mount: Deactivated successfully. 
May 27 03:16:41.870088 kubelet[2688]: I0527 03:16:41.870048 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:16:41.871637 containerd[1578]: time="2025-05-27T03:16:41.871589451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:16:41.972462 kubelet[2688]: I0527 03:16:41.972344 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5796bfdc9f-zqx9q" podStartSLOduration=2.767351364 podStartE2EDuration="5.972325581s" podCreationTimestamp="2025-05-27 03:16:36 +0000 UTC" firstStartedPulling="2025-05-27 03:16:36.833706914 +0000 UTC m=+20.141512179" lastFinishedPulling="2025-05-27 03:16:40.038681131 +0000 UTC m=+23.346486396" observedRunningTime="2025-05-27 03:16:40.877531373 +0000 UTC m=+24.185336648" watchObservedRunningTime="2025-05-27 03:16:41.972325581 +0000 UTC m=+25.280130846" May 27 03:16:42.782186 kubelet[2688]: E0527 03:16:42.782099 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ccgt" podUID="4f76a0d4-a083-4947-aa3e-c00e6bb39edf" May 27 03:16:44.781798 kubelet[2688]: E0527 03:16:44.781744 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ccgt" podUID="4f76a0d4-a083-4947-aa3e-c00e6bb39edf" May 27 03:16:45.883821 containerd[1578]: time="2025-05-27T03:16:45.883743294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:45.884561 containerd[1578]: time="2025-05-27T03:16:45.884475781Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:16:45.885747 containerd[1578]: time="2025-05-27T03:16:45.885690273Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:45.888747 containerd[1578]: time="2025-05-27T03:16:45.888693028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:45.889257 containerd[1578]: time="2025-05-27T03:16:45.889220699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.017587516s" May 27 03:16:45.889309 containerd[1578]: time="2025-05-27T03:16:45.889259592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:16:45.894524 containerd[1578]: time="2025-05-27T03:16:45.894467820Z" level=info msg="CreateContainer within sandbox \"3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:16:45.906578 containerd[1578]: time="2025-05-27T03:16:45.906075267Z" level=info msg="Container 3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:45.919930 containerd[1578]: time="2025-05-27T03:16:45.919875062Z" level=info msg="CreateContainer within sandbox \"3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01\"" May 27 03:16:45.920495 containerd[1578]: time="2025-05-27T03:16:45.920469650Z" level=info msg="StartContainer for \"3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01\"" May 27 03:16:45.922038 containerd[1578]: time="2025-05-27T03:16:45.921997481Z" level=info msg="connecting to shim 3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01" address="unix:///run/containerd/s/6c4165184de43606955b3803ba03c9d4998186f0914312f8ccb9dc160b587d60" protocol=ttrpc version=3 May 27 03:16:45.944720 systemd[1]: Started cri-containerd-3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01.scope - libcontainer container 3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01. May 27 03:16:45.996690 containerd[1578]: time="2025-05-27T03:16:45.996628357Z" level=info msg="StartContainer for \"3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01\" returns successfully" May 27 03:16:46.782942 kubelet[2688]: E0527 03:16:46.782819 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9ccgt" podUID="4f76a0d4-a083-4947-aa3e-c00e6bb39edf" May 27 03:16:47.115529 systemd[1]: cri-containerd-3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01.scope: Deactivated successfully. May 27 03:16:47.117677 systemd[1]: cri-containerd-3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01.scope: Consumed 796ms CPU time, 176.1M memory peak, 3.3M read from disk, 170.9M written to disk. 
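The `pod_startup_latency_tracker` entry earlier (for `calico-typha-5796bfdc9f-zqx9q`) reports two durations that can be checked against its own timestamps: `podStartE2EDuration` is observed-running time minus pod creation time, and `podStartSLOduration` subtracts the image-pull window (`lastFinishedPulling` minus `firstStartedPulling`) from that, since kubelet excludes pull time from the SLO metric. A sketch verifying the arithmetic, using the timestamps copied from the log (the layout string is Go's default `time.Time` print format, which these entries use):

```go
package main

import (
	"fmt"
	"time"
)

// kubeTime matches the timestamp format kubelet prints in these entries
// (Go's default time.Time String() layout).
const kubeTime = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(kubeTime, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied verbatim from the pod_startup_latency_tracker entry.
	created := mustParse("2025-05-27 03:16:36 +0000 UTC")
	firstPull := mustParse("2025-05-27 03:16:36.833706914 +0000 UTC")
	lastPull := mustParse("2025-05-27 03:16:40.038681131 +0000 UTC")
	running := mustParse("2025-05-27 03:16:41.972325581 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration = E2E minus pull time

	fmt.Println(e2e) // 5.972325581s
	fmt.Println(slo) // 2.767351364s
}
```

Both printed values match the logged `podStartE2EDuration="5.972325581s"` and `podStartSLOduration=2.767351364`, confirming the tracker's bookkeeping for this pod.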
May 27 03:16:47.118911 containerd[1578]: time="2025-05-27T03:16:47.118859549Z" level=info msg="received exit event container_id:\"3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01\" id:\"3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01\" pid:3463 exited_at:{seconds:1748315807 nanos:117947475}" May 27 03:16:47.119469 containerd[1578]: time="2025-05-27T03:16:47.119216780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01\" id:\"3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01\" pid:3463 exited_at:{seconds:1748315807 nanos:117947475}" May 27 03:16:47.147238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ddf957bd76970bcd6c9f526074b524c1c3b3e2b438dfa7fb8dd10bed000fd01-rootfs.mount: Deactivated successfully. May 27 03:16:47.182122 kubelet[2688]: I0527 03:16:47.182038 2688 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:16:47.557641 systemd[1]: Created slice kubepods-burstable-podbbf03748_7489_419f_912b_a5d0b65caa63.slice - libcontainer container kubepods-burstable-podbbf03748_7489_419f_912b_a5d0b65caa63.slice. May 27 03:16:47.573796 systemd[1]: Created slice kubepods-besteffort-pod21aef915_9ac2_4858_b6ff_396dbfcdc7d5.slice - libcontainer container kubepods-besteffort-pod21aef915_9ac2_4858_b6ff_396dbfcdc7d5.slice. 
May 27 03:16:47.579496 kubelet[2688]: I0527 03:16:47.579159 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf03748-7489-419f-912b-a5d0b65caa63-config-volume\") pod \"coredns-674b8bbfcf-wzs98\" (UID: \"bbf03748-7489-419f-912b-a5d0b65caa63\") " pod="kube-system/coredns-674b8bbfcf-wzs98" May 27 03:16:47.579496 kubelet[2688]: I0527 03:16:47.579200 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96c159fe-d07b-4a1a-92e7-57ea94582ec6-calico-apiserver-certs\") pod \"calico-apiserver-6fcb869445-ns9tb\" (UID: \"96c159fe-d07b-4a1a-92e7-57ea94582ec6\") " pod="calico-apiserver/calico-apiserver-6fcb869445-ns9tb" May 27 03:16:47.579496 kubelet[2688]: I0527 03:16:47.579222 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4785g\" (UniqueName: \"kubernetes.io/projected/bbf03748-7489-419f-912b-a5d0b65caa63-kube-api-access-4785g\") pod \"coredns-674b8bbfcf-wzs98\" (UID: \"bbf03748-7489-419f-912b-a5d0b65caa63\") " pod="kube-system/coredns-674b8bbfcf-wzs98" May 27 03:16:47.579496 kubelet[2688]: I0527 03:16:47.579236 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4080b429-c6ee-45b0-9def-70e180093c71-config-volume\") pod \"coredns-674b8bbfcf-4rtxw\" (UID: \"4080b429-c6ee-45b0-9def-70e180093c71\") " pod="kube-system/coredns-674b8bbfcf-4rtxw" May 27 03:16:47.579496 kubelet[2688]: I0527 03:16:47.579251 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq6f9\" (UniqueName: \"kubernetes.io/projected/4080b429-c6ee-45b0-9def-70e180093c71-kube-api-access-hq6f9\") pod \"coredns-674b8bbfcf-4rtxw\" (UID: 
\"4080b429-c6ee-45b0-9def-70e180093c71\") " pod="kube-system/coredns-674b8bbfcf-4rtxw" May 27 03:16:47.579703 kubelet[2688]: I0527 03:16:47.579265 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9s22\" (UniqueName: \"kubernetes.io/projected/96c159fe-d07b-4a1a-92e7-57ea94582ec6-kube-api-access-k9s22\") pod \"calico-apiserver-6fcb869445-ns9tb\" (UID: \"96c159fe-d07b-4a1a-92e7-57ea94582ec6\") " pod="calico-apiserver/calico-apiserver-6fcb869445-ns9tb" May 27 03:16:47.579703 kubelet[2688]: I0527 03:16:47.579280 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21aef915-9ac2-4858-b6ff-396dbfcdc7d5-tigera-ca-bundle\") pod \"calico-kube-controllers-84868df674-nwzjl\" (UID: \"21aef915-9ac2-4858-b6ff-396dbfcdc7d5\") " pod="calico-system/calico-kube-controllers-84868df674-nwzjl" May 27 03:16:47.579703 kubelet[2688]: I0527 03:16:47.579300 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggd7b\" (UniqueName: \"kubernetes.io/projected/21aef915-9ac2-4858-b6ff-396dbfcdc7d5-kube-api-access-ggd7b\") pod \"calico-kube-controllers-84868df674-nwzjl\" (UID: \"21aef915-9ac2-4858-b6ff-396dbfcdc7d5\") " pod="calico-system/calico-kube-controllers-84868df674-nwzjl" May 27 03:16:47.579703 kubelet[2688]: I0527 03:16:47.579334 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9319b5b9-b72a-46d4-bde6-100d6b9745f2-calico-apiserver-certs\") pod \"calico-apiserver-6fcb869445-7f5xw\" (UID: \"9319b5b9-b72a-46d4-bde6-100d6b9745f2\") " pod="calico-apiserver/calico-apiserver-6fcb869445-7f5xw" May 27 03:16:47.579703 kubelet[2688]: I0527 03:16:47.579347 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-446cz\" (UniqueName: \"kubernetes.io/projected/9319b5b9-b72a-46d4-bde6-100d6b9745f2-kube-api-access-446cz\") pod \"calico-apiserver-6fcb869445-7f5xw\" (UID: \"9319b5b9-b72a-46d4-bde6-100d6b9745f2\") " pod="calico-apiserver/calico-apiserver-6fcb869445-7f5xw" May 27 03:16:47.587670 systemd[1]: Created slice kubepods-burstable-pod4080b429_c6ee_45b0_9def_70e180093c71.slice - libcontainer container kubepods-burstable-pod4080b429_c6ee_45b0_9def_70e180093c71.slice. May 27 03:16:47.595942 systemd[1]: Created slice kubepods-besteffort-pod9319b5b9_b72a_46d4_bde6_100d6b9745f2.slice - libcontainer container kubepods-besteffort-pod9319b5b9_b72a_46d4_bde6_100d6b9745f2.slice. May 27 03:16:47.601821 systemd[1]: Created slice kubepods-besteffort-pod96c159fe_d07b_4a1a_92e7_57ea94582ec6.slice - libcontainer container kubepods-besteffort-pod96c159fe_d07b_4a1a_92e7_57ea94582ec6.slice. May 27 03:16:47.608853 systemd[1]: Created slice kubepods-besteffort-podf7cc3cd0_7029_43d4_906b_b70321fb5fb6.slice - libcontainer container kubepods-besteffort-podf7cc3cd0_7029_43d4_906b_b70321fb5fb6.slice. May 27 03:16:47.616811 systemd[1]: Created slice kubepods-besteffort-pod9ff6044c_a1cf_41a0_9830_a2555119f01d.slice - libcontainer container kubepods-besteffort-pod9ff6044c_a1cf_41a0_9830_a2555119f01d.slice. 
May 27 03:16:47.680539 kubelet[2688]: I0527 03:16:47.680116 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ff6044c-a1cf-41a0-9830-a2555119f01d-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-qbl7d\" (UID: \"9ff6044c-a1cf-41a0-9830-a2555119f01d\") " pod="calico-system/goldmane-78d55f7ddc-qbl7d" May 27 03:16:47.680539 kubelet[2688]: I0527 03:16:47.680224 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff6044c-a1cf-41a0-9830-a2555119f01d-config\") pod \"goldmane-78d55f7ddc-qbl7d\" (UID: \"9ff6044c-a1cf-41a0-9830-a2555119f01d\") " pod="calico-system/goldmane-78d55f7ddc-qbl7d" May 27 03:16:47.680539 kubelet[2688]: I0527 03:16:47.680294 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-whisker-backend-key-pair\") pod \"whisker-6f5dd5454f-m64f7\" (UID: \"f7cc3cd0-7029-43d4-906b-b70321fb5fb6\") " pod="calico-system/whisker-6f5dd5454f-m64f7" May 27 03:16:47.680539 kubelet[2688]: I0527 03:16:47.680318 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-whisker-ca-bundle\") pod \"whisker-6f5dd5454f-m64f7\" (UID: \"f7cc3cd0-7029-43d4-906b-b70321fb5fb6\") " pod="calico-system/whisker-6f5dd5454f-m64f7" May 27 03:16:47.680539 kubelet[2688]: I0527 03:16:47.680350 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9ff6044c-a1cf-41a0-9830-a2555119f01d-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-qbl7d\" (UID: \"9ff6044c-a1cf-41a0-9830-a2555119f01d\") " 
pod="calico-system/goldmane-78d55f7ddc-qbl7d" May 27 03:16:47.680875 kubelet[2688]: I0527 03:16:47.680372 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzqw\" (UniqueName: \"kubernetes.io/projected/9ff6044c-a1cf-41a0-9830-a2555119f01d-kube-api-access-glzqw\") pod \"goldmane-78d55f7ddc-qbl7d\" (UID: \"9ff6044c-a1cf-41a0-9830-a2555119f01d\") " pod="calico-system/goldmane-78d55f7ddc-qbl7d" May 27 03:16:47.680875 kubelet[2688]: I0527 03:16:47.680419 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lq58\" (UniqueName: \"kubernetes.io/projected/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-kube-api-access-9lq58\") pod \"whisker-6f5dd5454f-m64f7\" (UID: \"f7cc3cd0-7029-43d4-906b-b70321fb5fb6\") " pod="calico-system/whisker-6f5dd5454f-m64f7" May 27 03:16:47.865939 containerd[1578]: time="2025-05-27T03:16:47.865776214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wzs98,Uid:bbf03748-7489-419f-912b-a5d0b65caa63,Namespace:kube-system,Attempt:0,}" May 27 03:16:47.882002 containerd[1578]: time="2025-05-27T03:16:47.881932832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84868df674-nwzjl,Uid:21aef915-9ac2-4858-b6ff-396dbfcdc7d5,Namespace:calico-system,Attempt:0,}" May 27 03:16:47.893564 containerd[1578]: time="2025-05-27T03:16:47.893494297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rtxw,Uid:4080b429-c6ee-45b0-9def-70e180093c71,Namespace:kube-system,Attempt:0,}" May 27 03:16:47.895974 containerd[1578]: time="2025-05-27T03:16:47.895921658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:16:47.902184 containerd[1578]: time="2025-05-27T03:16:47.902124392Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6fcb869445-7f5xw,Uid:9319b5b9-b72a-46d4-bde6-100d6b9745f2,Namespace:calico-apiserver,Attempt:0,}" May 27 03:16:47.906999 containerd[1578]: time="2025-05-27T03:16:47.906949910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb869445-ns9tb,Uid:96c159fe-d07b-4a1a-92e7-57ea94582ec6,Namespace:calico-apiserver,Attempt:0,}" May 27 03:16:47.917108 containerd[1578]: time="2025-05-27T03:16:47.916912480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f5dd5454f-m64f7,Uid:f7cc3cd0-7029-43d4-906b-b70321fb5fb6,Namespace:calico-system,Attempt:0,}" May 27 03:16:47.920134 containerd[1578]: time="2025-05-27T03:16:47.920094388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-qbl7d,Uid:9ff6044c-a1cf-41a0-9830-a2555119f01d,Namespace:calico-system,Attempt:0,}" May 27 03:16:48.039028 containerd[1578]: time="2025-05-27T03:16:48.038963265Z" level=error msg="Failed to destroy network for sandbox \"236da55cd27b7fe9eda671a2d288d6e7d0db57618168e3bb2bde996b4836ccb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.047338 containerd[1578]: time="2025-05-27T03:16:48.044090197Z" level=error msg="Failed to destroy network for sandbox \"35a0df4a19abe44c98c0fa1d409b6332a91940c7381801f5e1ec972fdd3fe819\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.050526 containerd[1578]: time="2025-05-27T03:16:48.050376228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84868df674-nwzjl,Uid:21aef915-9ac2-4858-b6ff-396dbfcdc7d5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"236da55cd27b7fe9eda671a2d288d6e7d0db57618168e3bb2bde996b4836ccb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.062087 containerd[1578]: time="2025-05-27T03:16:48.062036805Z" level=error msg="Failed to destroy network for sandbox \"a5738acffde8bc85dae9b61fc32f3aff2d1bc85a3c69d8a912e4fa3d65b8c977\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.066313 kubelet[2688]: E0527 03:16:48.066205 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236da55cd27b7fe9eda671a2d288d6e7d0db57618168e3bb2bde996b4836ccb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.066988 kubelet[2688]: E0527 03:16:48.066370 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236da55cd27b7fe9eda671a2d288d6e7d0db57618168e3bb2bde996b4836ccb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84868df674-nwzjl" May 27 03:16:48.066988 kubelet[2688]: E0527 03:16:48.066421 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236da55cd27b7fe9eda671a2d288d6e7d0db57618168e3bb2bde996b4836ccb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84868df674-nwzjl" May 27 03:16:48.066988 kubelet[2688]: E0527 03:16:48.066520 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84868df674-nwzjl_calico-system(21aef915-9ac2-4858-b6ff-396dbfcdc7d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84868df674-nwzjl_calico-system(21aef915-9ac2-4858-b6ff-396dbfcdc7d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"236da55cd27b7fe9eda671a2d288d6e7d0db57618168e3bb2bde996b4836ccb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84868df674-nwzjl" podUID="21aef915-9ac2-4858-b6ff-396dbfcdc7d5" May 27 03:16:48.070043 containerd[1578]: time="2025-05-27T03:16:48.069999185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wzs98,Uid:bbf03748-7489-419f-912b-a5d0b65caa63,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a0df4a19abe44c98c0fa1d409b6332a91940c7381801f5e1ec972fdd3fe819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.070486 kubelet[2688]: E0527 03:16:48.070424 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a0df4a19abe44c98c0fa1d409b6332a91940c7381801f5e1ec972fdd3fe819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.070609 
kubelet[2688]: E0527 03:16:48.070588 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a0df4a19abe44c98c0fa1d409b6332a91940c7381801f5e1ec972fdd3fe819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wzs98" May 27 03:16:48.070654 kubelet[2688]: E0527 03:16:48.070615 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a0df4a19abe44c98c0fa1d409b6332a91940c7381801f5e1ec972fdd3fe819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wzs98" May 27 03:16:48.070713 kubelet[2688]: E0527 03:16:48.070686 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wzs98_kube-system(bbf03748-7489-419f-912b-a5d0b65caa63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wzs98_kube-system(bbf03748-7489-419f-912b-a5d0b65caa63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35a0df4a19abe44c98c0fa1d409b6332a91940c7381801f5e1ec972fdd3fe819\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wzs98" podUID="bbf03748-7489-419f-912b-a5d0b65caa63" May 27 03:16:48.077840 containerd[1578]: time="2025-05-27T03:16:48.077778450Z" level=error msg="Failed to destroy network for sandbox \"e31244044464b1ac6c0342089f7a00c882db01552cf39d8851de3dc15c2ce1b7\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.080376 containerd[1578]: time="2025-05-27T03:16:48.080314786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rtxw,Uid:4080b429-c6ee-45b0-9def-70e180093c71,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5738acffde8bc85dae9b61fc32f3aff2d1bc85a3c69d8a912e4fa3d65b8c977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.080882 kubelet[2688]: E0527 03:16:48.080842 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5738acffde8bc85dae9b61fc32f3aff2d1bc85a3c69d8a912e4fa3d65b8c977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.080959 kubelet[2688]: E0527 03:16:48.080911 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5738acffde8bc85dae9b61fc32f3aff2d1bc85a3c69d8a912e4fa3d65b8c977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4rtxw" May 27 03:16:48.080959 kubelet[2688]: E0527 03:16:48.080933 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5738acffde8bc85dae9b61fc32f3aff2d1bc85a3c69d8a912e4fa3d65b8c977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4rtxw" May 27 03:16:48.081019 kubelet[2688]: E0527 03:16:48.080982 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4rtxw_kube-system(4080b429-c6ee-45b0-9def-70e180093c71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4rtxw_kube-system(4080b429-c6ee-45b0-9def-70e180093c71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5738acffde8bc85dae9b61fc32f3aff2d1bc85a3c69d8a912e4fa3d65b8c977\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4rtxw" podUID="4080b429-c6ee-45b0-9def-70e180093c71" May 27 03:16:48.082323 containerd[1578]: time="2025-05-27T03:16:48.082112103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-qbl7d,Uid:9ff6044c-a1cf-41a0-9830-a2555119f01d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e31244044464b1ac6c0342089f7a00c882db01552cf39d8851de3dc15c2ce1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.082407 kubelet[2688]: E0527 03:16:48.082354 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e31244044464b1ac6c0342089f7a00c882db01552cf39d8851de3dc15c2ce1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.082407 
kubelet[2688]: E0527 03:16:48.082382 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e31244044464b1ac6c0342089f7a00c882db01552cf39d8851de3dc15c2ce1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-qbl7d" May 27 03:16:48.082407 kubelet[2688]: E0527 03:16:48.082397 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e31244044464b1ac6c0342089f7a00c882db01552cf39d8851de3dc15c2ce1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-qbl7d" May 27 03:16:48.082517 kubelet[2688]: E0527 03:16:48.082432 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-qbl7d_calico-system(9ff6044c-a1cf-41a0-9830-a2555119f01d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-qbl7d_calico-system(9ff6044c-a1cf-41a0-9830-a2555119f01d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e31244044464b1ac6c0342089f7a00c882db01552cf39d8851de3dc15c2ce1b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-qbl7d" podUID="9ff6044c-a1cf-41a0-9830-a2555119f01d" May 27 03:16:48.085375 containerd[1578]: time="2025-05-27T03:16:48.085332042Z" level=error msg="Failed to destroy network for sandbox \"5436db5680efca7acabd928504fab91f95e15295c8e35453c5b8ae273a709d2d\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.088122 containerd[1578]: time="2025-05-27T03:16:48.087986840Z" level=error msg="Failed to destroy network for sandbox \"cc99aeea9ff1c6767f6c0d91a2f60c3286f7cb255755c50e5d72524393381bca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.088746 containerd[1578]: time="2025-05-27T03:16:48.088702204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb869445-7f5xw,Uid:9319b5b9-b72a-46d4-bde6-100d6b9745f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5436db5680efca7acabd928504fab91f95e15295c8e35453c5b8ae273a709d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.089034 kubelet[2688]: E0527 03:16:48.088983 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5436db5680efca7acabd928504fab91f95e15295c8e35453c5b8ae273a709d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.089092 kubelet[2688]: E0527 03:16:48.089047 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5436db5680efca7acabd928504fab91f95e15295c8e35453c5b8ae273a709d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcb869445-7f5xw" May 27 03:16:48.089092 kubelet[2688]: E0527 03:16:48.089066 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5436db5680efca7acabd928504fab91f95e15295c8e35453c5b8ae273a709d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcb869445-7f5xw" May 27 03:16:48.089158 kubelet[2688]: E0527 03:16:48.089113 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fcb869445-7f5xw_calico-apiserver(9319b5b9-b72a-46d4-bde6-100d6b9745f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fcb869445-7f5xw_calico-apiserver(9319b5b9-b72a-46d4-bde6-100d6b9745f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5436db5680efca7acabd928504fab91f95e15295c8e35453c5b8ae273a709d2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcb869445-7f5xw" podUID="9319b5b9-b72a-46d4-bde6-100d6b9745f2" May 27 03:16:48.090772 containerd[1578]: time="2025-05-27T03:16:48.090701199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb869445-ns9tb,Uid:96c159fe-d07b-4a1a-92e7-57ea94582ec6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc99aeea9ff1c6767f6c0d91a2f60c3286f7cb255755c50e5d72524393381bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 27 03:16:48.090977 kubelet[2688]: E0527 03:16:48.090942 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc99aeea9ff1c6767f6c0d91a2f60c3286f7cb255755c50e5d72524393381bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.091117 kubelet[2688]: E0527 03:16:48.091078 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc99aeea9ff1c6767f6c0d91a2f60c3286f7cb255755c50e5d72524393381bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcb869445-ns9tb" May 27 03:16:48.091117 kubelet[2688]: E0527 03:16:48.091121 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc99aeea9ff1c6767f6c0d91a2f60c3286f7cb255755c50e5d72524393381bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcb869445-ns9tb" May 27 03:16:48.091408 kubelet[2688]: E0527 03:16:48.091226 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fcb869445-ns9tb_calico-apiserver(96c159fe-d07b-4a1a-92e7-57ea94582ec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fcb869445-ns9tb_calico-apiserver(96c159fe-d07b-4a1a-92e7-57ea94582ec6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cc99aeea9ff1c6767f6c0d91a2f60c3286f7cb255755c50e5d72524393381bca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcb869445-ns9tb" podUID="96c159fe-d07b-4a1a-92e7-57ea94582ec6" May 27 03:16:48.101080 containerd[1578]: time="2025-05-27T03:16:48.100998316Z" level=error msg="Failed to destroy network for sandbox \"23190aad9ff1c8ee600423b8232075331d186802d6022987e381ec07eeea0d87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.102535 containerd[1578]: time="2025-05-27T03:16:48.102484267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f5dd5454f-m64f7,Uid:f7cc3cd0-7029-43d4-906b-b70321fb5fb6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23190aad9ff1c8ee600423b8232075331d186802d6022987e381ec07eeea0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.102775 kubelet[2688]: E0527 03:16:48.102732 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23190aad9ff1c8ee600423b8232075331d186802d6022987e381ec07eeea0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.102862 kubelet[2688]: E0527 03:16:48.102787 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"23190aad9ff1c8ee600423b8232075331d186802d6022987e381ec07eeea0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f5dd5454f-m64f7" May 27 03:16:48.102862 kubelet[2688]: E0527 03:16:48.102813 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23190aad9ff1c8ee600423b8232075331d186802d6022987e381ec07eeea0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f5dd5454f-m64f7" May 27 03:16:48.102926 kubelet[2688]: E0527 03:16:48.102867 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f5dd5454f-m64f7_calico-system(f7cc3cd0-7029-43d4-906b-b70321fb5fb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f5dd5454f-m64f7_calico-system(f7cc3cd0-7029-43d4-906b-b70321fb5fb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23190aad9ff1c8ee600423b8232075331d186802d6022987e381ec07eeea0d87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f5dd5454f-m64f7" podUID="f7cc3cd0-7029-43d4-906b-b70321fb5fb6" May 27 03:16:48.791390 systemd[1]: Created slice kubepods-besteffort-pod4f76a0d4_a083_4947_aa3e_c00e6bb39edf.slice - libcontainer container kubepods-besteffort-pod4f76a0d4_a083_4947_aa3e_c00e6bb39edf.slice. 
May 27 03:16:48.793876 containerd[1578]: time="2025-05-27T03:16:48.793841155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9ccgt,Uid:4f76a0d4-a083-4947-aa3e-c00e6bb39edf,Namespace:calico-system,Attempt:0,}" May 27 03:16:48.843799 containerd[1578]: time="2025-05-27T03:16:48.843739574Z" level=error msg="Failed to destroy network for sandbox \"cececa6af1e6d1c812a429fc98d87a87c570923f8066940a95c624a74091e395\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.845708 containerd[1578]: time="2025-05-27T03:16:48.845647397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9ccgt,Uid:4f76a0d4-a083-4947-aa3e-c00e6bb39edf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cececa6af1e6d1c812a429fc98d87a87c570923f8066940a95c624a74091e395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.845926 kubelet[2688]: E0527 03:16:48.845880 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cececa6af1e6d1c812a429fc98d87a87c570923f8066940a95c624a74091e395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:16:48.845976 kubelet[2688]: E0527 03:16:48.845947 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cececa6af1e6d1c812a429fc98d87a87c570923f8066940a95c624a74091e395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9ccgt" May 27 03:16:48.845976 kubelet[2688]: E0527 03:16:48.845968 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cececa6af1e6d1c812a429fc98d87a87c570923f8066940a95c624a74091e395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9ccgt" May 27 03:16:48.846065 kubelet[2688]: E0527 03:16:48.846018 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9ccgt_calico-system(4f76a0d4-a083-4947-aa3e-c00e6bb39edf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9ccgt_calico-system(4f76a0d4-a083-4947-aa3e-c00e6bb39edf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cececa6af1e6d1c812a429fc98d87a87c570923f8066940a95c624a74091e395\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9ccgt" podUID="4f76a0d4-a083-4947-aa3e-c00e6bb39edf" May 27 03:16:48.846357 systemd[1]: run-netns-cni\x2d8d611c31\x2d4e6d\x2d5b70\x2d2345\x2d2c97c93693d7.mount: Deactivated successfully. May 27 03:16:52.946004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723204105.mount: Deactivated successfully. 
May 27 03:16:54.122612 containerd[1578]: time="2025-05-27T03:16:54.122519434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:54.123594 containerd[1578]: time="2025-05-27T03:16:54.123521395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:16:54.125112 containerd[1578]: time="2025-05-27T03:16:54.125072107Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:54.127160 containerd[1578]: time="2025-05-27T03:16:54.127119281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:54.127641 containerd[1578]: time="2025-05-27T03:16:54.127610152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.231651375s" May 27 03:16:54.127680 containerd[1578]: time="2025-05-27T03:16:54.127642113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:16:54.152016 containerd[1578]: time="2025-05-27T03:16:54.151957212Z" level=info msg="CreateContainer within sandbox \"3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:16:54.161766 containerd[1578]: time="2025-05-27T03:16:54.161704305Z" level=info msg="Container 
387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:54.174286 containerd[1578]: time="2025-05-27T03:16:54.174225407Z" level=info msg="CreateContainer within sandbox \"3addaa27bfae671aafb3284d676c1ee8ffd1bdd9ef0e53a0273749d0ee2fe7fc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85\"" May 27 03:16:54.175033 containerd[1578]: time="2025-05-27T03:16:54.174845601Z" level=info msg="StartContainer for \"387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85\"" May 27 03:16:54.176462 containerd[1578]: time="2025-05-27T03:16:54.176427563Z" level=info msg="connecting to shim 387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85" address="unix:///run/containerd/s/6c4165184de43606955b3803ba03c9d4998186f0914312f8ccb9dc160b587d60" protocol=ttrpc version=3 May 27 03:16:54.208801 systemd[1]: Started cri-containerd-387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85.scope - libcontainer container 387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85. May 27 03:16:54.351580 containerd[1578]: time="2025-05-27T03:16:54.350846267Z" level=info msg="StartContainer for \"387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85\" returns successfully" May 27 03:16:54.389089 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:16:54.389226 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 03:16:54.525958 kubelet[2688]: I0527 03:16:54.525912 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-whisker-backend-key-pair\") pod \"f7cc3cd0-7029-43d4-906b-b70321fb5fb6\" (UID: \"f7cc3cd0-7029-43d4-906b-b70321fb5fb6\") " May 27 03:16:54.527584 kubelet[2688]: I0527 03:16:54.526775 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-whisker-ca-bundle\") pod \"f7cc3cd0-7029-43d4-906b-b70321fb5fb6\" (UID: \"f7cc3cd0-7029-43d4-906b-b70321fb5fb6\") " May 27 03:16:54.527584 kubelet[2688]: I0527 03:16:54.526849 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lq58\" (UniqueName: \"kubernetes.io/projected/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-kube-api-access-9lq58\") pod \"f7cc3cd0-7029-43d4-906b-b70321fb5fb6\" (UID: \"f7cc3cd0-7029-43d4-906b-b70321fb5fb6\") " May 27 03:16:54.533704 kubelet[2688]: I0527 03:16:54.533648 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f7cc3cd0-7029-43d4-906b-b70321fb5fb6" (UID: "f7cc3cd0-7029-43d4-906b-b70321fb5fb6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:16:54.539145 kubelet[2688]: I0527 03:16:54.539074 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f7cc3cd0-7029-43d4-906b-b70321fb5fb6" (UID: "f7cc3cd0-7029-43d4-906b-b70321fb5fb6"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:16:54.539458 kubelet[2688]: I0527 03:16:54.539394 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-kube-api-access-9lq58" (OuterVolumeSpecName: "kube-api-access-9lq58") pod "f7cc3cd0-7029-43d4-906b-b70321fb5fb6" (UID: "f7cc3cd0-7029-43d4-906b-b70321fb5fb6"). InnerVolumeSpecName "kube-api-access-9lq58". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:16:54.628260 kubelet[2688]: I0527 03:16:54.628201 2688 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 03:16:54.628260 kubelet[2688]: I0527 03:16:54.628242 2688 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 03:16:54.628260 kubelet[2688]: I0527 03:16:54.628250 2688 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lq58\" (UniqueName: \"kubernetes.io/projected/f7cc3cd0-7029-43d4-906b-b70321fb5fb6-kube-api-access-9lq58\") on node \"localhost\" DevicePath \"\"" May 27 03:16:54.790604 systemd[1]: Removed slice kubepods-besteffort-podf7cc3cd0_7029_43d4_906b_b70321fb5fb6.slice - libcontainer container kubepods-besteffort-podf7cc3cd0_7029_43d4_906b_b70321fb5fb6.slice. 
May 27 03:16:54.931475 kubelet[2688]: I0527 03:16:54.931387 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qpfhl" podStartSLOduration=1.6572137740000001 podStartE2EDuration="18.931354558s" podCreationTimestamp="2025-05-27 03:16:36 +0000 UTC" firstStartedPulling="2025-05-27 03:16:36.854289419 +0000 UTC m=+20.162094684" lastFinishedPulling="2025-05-27 03:16:54.128430203 +0000 UTC m=+37.436235468" observedRunningTime="2025-05-27 03:16:54.930990073 +0000 UTC m=+38.238795358" watchObservedRunningTime="2025-05-27 03:16:54.931354558 +0000 UTC m=+38.239159823" May 27 03:16:54.986059 systemd[1]: Created slice kubepods-besteffort-pod1bffbfb7_7888_4b4e_9c96_fa797074a5c4.slice - libcontainer container kubepods-besteffort-pod1bffbfb7_7888_4b4e_9c96_fa797074a5c4.slice. May 27 03:16:55.031688 kubelet[2688]: I0527 03:16:55.031642 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1bffbfb7-7888-4b4e-9c96-fa797074a5c4-whisker-backend-key-pair\") pod \"whisker-bf6dc74bf-gpwx9\" (UID: \"1bffbfb7-7888-4b4e-9c96-fa797074a5c4\") " pod="calico-system/whisker-bf6dc74bf-gpwx9" May 27 03:16:55.031688 kubelet[2688]: I0527 03:16:55.031702 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bffbfb7-7888-4b4e-9c96-fa797074a5c4-whisker-ca-bundle\") pod \"whisker-bf6dc74bf-gpwx9\" (UID: \"1bffbfb7-7888-4b4e-9c96-fa797074a5c4\") " pod="calico-system/whisker-bf6dc74bf-gpwx9" May 27 03:16:55.031688 kubelet[2688]: I0527 03:16:55.031717 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjqf\" (UniqueName: \"kubernetes.io/projected/1bffbfb7-7888-4b4e-9c96-fa797074a5c4-kube-api-access-lcjqf\") pod \"whisker-bf6dc74bf-gpwx9\" (UID: 
\"1bffbfb7-7888-4b4e-9c96-fa797074a5c4\") " pod="calico-system/whisker-bf6dc74bf-gpwx9" May 27 03:16:55.135699 systemd[1]: var-lib-kubelet-pods-f7cc3cd0\x2d7029\x2d43d4\x2d906b\x2db70321fb5fb6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9lq58.mount: Deactivated successfully. May 27 03:16:55.135833 systemd[1]: var-lib-kubelet-pods-f7cc3cd0\x2d7029\x2d43d4\x2d906b\x2db70321fb5fb6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 03:16:55.293140 containerd[1578]: time="2025-05-27T03:16:55.292804802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf6dc74bf-gpwx9,Uid:1bffbfb7-7888-4b4e-9c96-fa797074a5c4,Namespace:calico-system,Attempt:0,}" May 27 03:16:55.615186 systemd-networkd[1493]: cali365fc593f3d: Link UP May 27 03:16:55.615781 systemd-networkd[1493]: cali365fc593f3d: Gained carrier May 27 03:16:55.630856 containerd[1578]: 2025-05-27 03:16:55.392 [INFO][3849] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:16:55.630856 containerd[1578]: 2025-05-27 03:16:55.441 [INFO][3849] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0 whisker-bf6dc74bf- calico-system 1bffbfb7-7888-4b4e-9c96-fa797074a5c4 916 0 2025-05-27 03:16:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bf6dc74bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bf6dc74bf-gpwx9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali365fc593f3d [] [] }} ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Namespace="calico-system" Pod="whisker-bf6dc74bf-gpwx9" WorkloadEndpoint="localhost-k8s-whisker--bf6dc74bf--gpwx9-" May 27 03:16:55.630856 containerd[1578]: 2025-05-27 03:16:55.441 [INFO][3849] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Namespace="calico-system" Pod="whisker-bf6dc74bf-gpwx9" WorkloadEndpoint="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" May 27 03:16:55.630856 containerd[1578]: 2025-05-27 03:16:55.512 [INFO][3863] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" HandleID="k8s-pod-network.a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Workload="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.513 [INFO][3863] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" HandleID="k8s-pod-network.a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Workload="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f5470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bf6dc74bf-gpwx9", "timestamp":"2025-05-27 03:16:55.512855396 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.513 [INFO][3863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.514 [INFO][3863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.514 [INFO][3863] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.523 [INFO][3863] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" host="localhost" May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.530 [INFO][3863] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.535 [INFO][3863] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.537 [INFO][3863] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.539 [INFO][3863] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:16:55.631199 containerd[1578]: 2025-05-27 03:16:55.539 [INFO][3863] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" host="localhost" May 27 03:16:55.631476 containerd[1578]: 2025-05-27 03:16:55.540 [INFO][3863] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525 May 27 03:16:55.631476 containerd[1578]: 2025-05-27 03:16:55.597 [INFO][3863] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" host="localhost" May 27 03:16:55.631476 containerd[1578]: 2025-05-27 03:16:55.603 [INFO][3863] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" host="localhost" May 27 03:16:55.631476 containerd[1578]: 2025-05-27 03:16:55.603 [INFO][3863] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" host="localhost" May 27 03:16:55.631476 containerd[1578]: 2025-05-27 03:16:55.603 [INFO][3863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:16:55.631476 containerd[1578]: 2025-05-27 03:16:55.603 [INFO][3863] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" HandleID="k8s-pod-network.a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Workload="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" May 27 03:16:55.631684 containerd[1578]: 2025-05-27 03:16:55.607 [INFO][3849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Namespace="calico-system" Pod="whisker-bf6dc74bf-gpwx9" WorkloadEndpoint="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0", GenerateName:"whisker-bf6dc74bf-", Namespace:"calico-system", SelfLink:"", UID:"1bffbfb7-7888-4b4e-9c96-fa797074a5c4", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bf6dc74bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bf6dc74bf-gpwx9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali365fc593f3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:16:55.631684 containerd[1578]: 2025-05-27 03:16:55.607 [INFO][3849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Namespace="calico-system" Pod="whisker-bf6dc74bf-gpwx9" WorkloadEndpoint="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" May 27 03:16:55.631785 containerd[1578]: 2025-05-27 03:16:55.607 [INFO][3849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali365fc593f3d ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Namespace="calico-system" Pod="whisker-bf6dc74bf-gpwx9" WorkloadEndpoint="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" May 27 03:16:55.631785 containerd[1578]: 2025-05-27 03:16:55.615 [INFO][3849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Namespace="calico-system" Pod="whisker-bf6dc74bf-gpwx9" WorkloadEndpoint="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" May 27 03:16:55.631844 containerd[1578]: 2025-05-27 03:16:55.617 [INFO][3849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Namespace="calico-system" Pod="whisker-bf6dc74bf-gpwx9" 
WorkloadEndpoint="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0", GenerateName:"whisker-bf6dc74bf-", Namespace:"calico-system", SelfLink:"", UID:"1bffbfb7-7888-4b4e-9c96-fa797074a5c4", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bf6dc74bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525", Pod:"whisker-bf6dc74bf-gpwx9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali365fc593f3d", MAC:"9e:98:17:36:e7:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:16:55.631921 containerd[1578]: 2025-05-27 03:16:55.626 [INFO][3849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" Namespace="calico-system" Pod="whisker-bf6dc74bf-gpwx9" WorkloadEndpoint="localhost-k8s-whisker--bf6dc74bf--gpwx9-eth0" May 27 03:16:55.915004 kubelet[2688]: I0527 03:16:55.914862 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
May 27 03:16:55.944018 containerd[1578]: time="2025-05-27T03:16:55.943937247Z" level=info msg="connecting to shim a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525" address="unix:///run/containerd/s/a903c2cbf0a51c19f85f16a956c76673bc9fe55ad4dc692610d3c4aeff2903a2" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:55.974818 systemd[1]: Started cri-containerd-a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525.scope - libcontainer container a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525. May 27 03:16:55.990918 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:16:56.026815 containerd[1578]: time="2025-05-27T03:16:56.026773333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf6dc74bf-gpwx9,Uid:1bffbfb7-7888-4b4e-9c96-fa797074a5c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1cc85c6887412572be3398ad389d7c58ade754a88e3697fd5686eacd02d5525\"" May 27 03:16:56.031494 containerd[1578]: time="2025-05-27T03:16:56.031456806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:16:56.056185 kubelet[2688]: I0527 03:16:56.056139 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:16:56.131250 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:53268.service - OpenSSH per-connection server daemon (10.0.0.1:53268). May 27 03:16:56.189765 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 53268 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:16:56.191744 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:16:56.197349 systemd-logind[1565]: New session 8 of user core. May 27 03:16:56.208679 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 03:16:56.269701 containerd[1578]: time="2025-05-27T03:16:56.269646550Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:16:56.270798 containerd[1578]: time="2025-05-27T03:16:56.270746506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:16:56.277927 containerd[1578]: time="2025-05-27T03:16:56.277817963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:16:56.278302 kubelet[2688]: E0527 03:16:56.278239 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:16:56.278409 kubelet[2688]: E0527 03:16:56.278314 2688 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:16:56.285126 kubelet[2688]: E0527 03:16:56.285052 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6eb246ae1e7a46c38a021552b652d8fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lcjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf6dc74bf-gpwx9_calico-system(1bffbfb7-7888-4b4e-9c96-fa797074a5c4): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:16:56.287781 containerd[1578]: time="2025-05-27T03:16:56.287744780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:16:56.367157 sshd[4027]: Connection closed by 10.0.0.1 port 53268 May 27 03:16:56.367527 sshd-session[4024]: pam_unix(sshd:session): session closed for user core May 27 03:16:56.372319 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:53268.service: Deactivated successfully. May 27 03:16:56.374458 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:16:56.375230 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit. May 27 03:16:56.376529 systemd-logind[1565]: Removed session 8. May 27 03:16:56.517740 containerd[1578]: time="2025-05-27T03:16:56.517657775Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:16:56.519090 containerd[1578]: time="2025-05-27T03:16:56.519013069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:16:56.519341 containerd[1578]: time="2025-05-27T03:16:56.519084964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes 
read=86" May 27 03:16:56.519385 kubelet[2688]: E0527 03:16:56.519308 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:16:56.519502 kubelet[2688]: E0527 03:16:56.519378 2688 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:16:56.519579 kubelet[2688]: E0527 03:16:56.519509 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf6dc74bf-gpwx9_calico-system(1bffbfb7-7888-4b4e-9c96-fa797074a5c4): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:16:56.521029 kubelet[2688]: E0527 03:16:56.520962 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-bf6dc74bf-gpwx9" podUID="1bffbfb7-7888-4b4e-9c96-fa797074a5c4" May 27 03:16:56.785960 kubelet[2688]: I0527 03:16:56.785792 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7cc3cd0-7029-43d4-906b-b70321fb5fb6" path="/var/lib/kubelet/pods/f7cc3cd0-7029-43d4-906b-b70321fb5fb6/volumes" May 27 03:16:56.924469 kubelet[2688]: E0527 03:16:56.923011 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-bf6dc74bf-gpwx9" podUID="1bffbfb7-7888-4b4e-9c96-fa797074a5c4" May 27 03:16:57.368779 systemd-networkd[1493]: cali365fc593f3d: Gained IPv6LL May 27 03:16:57.423755 kubelet[2688]: I0527 03:16:57.423690 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:16:57.433665 systemd-networkd[1493]: vxlan.calico: Link UP May 27 03:16:57.433676 systemd-networkd[1493]: vxlan.calico: Gained carrier May 27 03:16:57.570826 containerd[1578]: time="2025-05-27T03:16:57.570764842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85\" id:\"72474d8811894c65ec3a4b02bf698d516d2ff9ee08bd2620d5360cca12811c29\" pid:4163 exit_status:1 exited_at:{seconds:1748315817 nanos:570425736}" May 27 03:16:57.667512 containerd[1578]: time="2025-05-27T03:16:57.667310451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85\" id:\"b8d6cb4dc1da233465188e60583c4975a2b0da0a3e4bb04879377516e91da2a0\" pid:4187 exit_status:1 
exited_at:{seconds:1748315817 nanos:666893267}" May 27 03:16:57.927353 kubelet[2688]: E0527 03:16:57.927135 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-bf6dc74bf-gpwx9" podUID="1bffbfb7-7888-4b4e-9c96-fa797074a5c4" May 27 03:16:58.584847 systemd-networkd[1493]: vxlan.calico: Gained IPv6LL May 27 03:16:59.782612 containerd[1578]: time="2025-05-27T03:16:59.782542365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9ccgt,Uid:4f76a0d4-a083-4947-aa3e-c00e6bb39edf,Namespace:calico-system,Attempt:0,}" May 27 03:16:59.783059 containerd[1578]: time="2025-05-27T03:16:59.782612707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84868df674-nwzjl,Uid:21aef915-9ac2-4858-b6ff-396dbfcdc7d5,Namespace:calico-system,Attempt:0,}" May 27 03:16:59.900616 systemd-networkd[1493]: cali532d481b9e8: Link UP May 27 03:16:59.901582 
systemd-networkd[1493]: cali532d481b9e8: Gained carrier May 27 03:16:59.911819 containerd[1578]: 2025-05-27 03:16:59.836 [INFO][4237] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9ccgt-eth0 csi-node-driver- calico-system 4f76a0d4-a083-4947-aa3e-c00e6bb39edf 733 0 2025-05-27 03:16:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9ccgt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali532d481b9e8 [] [] }} ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Namespace="calico-system" Pod="csi-node-driver-9ccgt" WorkloadEndpoint="localhost-k8s-csi--node--driver--9ccgt-" May 27 03:16:59.911819 containerd[1578]: 2025-05-27 03:16:59.836 [INFO][4237] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Namespace="calico-system" Pod="csi-node-driver-9ccgt" WorkloadEndpoint="localhost-k8s-csi--node--driver--9ccgt-eth0" May 27 03:16:59.911819 containerd[1578]: 2025-05-27 03:16:59.864 [INFO][4264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" HandleID="k8s-pod-network.9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Workload="localhost-k8s-csi--node--driver--9ccgt-eth0" May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.864 [INFO][4264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" 
HandleID="k8s-pod-network.9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Workload="localhost-k8s-csi--node--driver--9ccgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e57a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9ccgt", "timestamp":"2025-05-27 03:16:59.864365205 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.864 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.864 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.864 [INFO][4264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.871 [INFO][4264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" host="localhost" May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.877 [INFO][4264] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.881 [INFO][4264] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.882 [INFO][4264] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.884 [INFO][4264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:16:59.912026 containerd[1578]: 2025-05-27 03:16:59.884 
[INFO][4264] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" host="localhost" May 27 03:16:59.912250 containerd[1578]: 2025-05-27 03:16:59.885 [INFO][4264] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e May 27 03:16:59.912250 containerd[1578]: 2025-05-27 03:16:59.888 [INFO][4264] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" host="localhost" May 27 03:16:59.912250 containerd[1578]: 2025-05-27 03:16:59.894 [INFO][4264] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" host="localhost" May 27 03:16:59.912250 containerd[1578]: 2025-05-27 03:16:59.894 [INFO][4264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" host="localhost" May 27 03:16:59.912250 containerd[1578]: 2025-05-27 03:16:59.894 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:16:59.912250 containerd[1578]: 2025-05-27 03:16:59.894 [INFO][4264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" HandleID="k8s-pod-network.9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Workload="localhost-k8s-csi--node--driver--9ccgt-eth0" May 27 03:16:59.912378 containerd[1578]: 2025-05-27 03:16:59.897 [INFO][4237] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Namespace="calico-system" Pod="csi-node-driver-9ccgt" WorkloadEndpoint="localhost-k8s-csi--node--driver--9ccgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9ccgt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f76a0d4-a083-4947-aa3e-c00e6bb39edf", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9ccgt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali532d481b9e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:16:59.912428 containerd[1578]: 2025-05-27 03:16:59.897 [INFO][4237] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Namespace="calico-system" Pod="csi-node-driver-9ccgt" WorkloadEndpoint="localhost-k8s-csi--node--driver--9ccgt-eth0" May 27 03:16:59.912428 containerd[1578]: 2025-05-27 03:16:59.897 [INFO][4237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali532d481b9e8 ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Namespace="calico-system" Pod="csi-node-driver-9ccgt" WorkloadEndpoint="localhost-k8s-csi--node--driver--9ccgt-eth0" May 27 03:16:59.912428 containerd[1578]: 2025-05-27 03:16:59.901 [INFO][4237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Namespace="calico-system" Pod="csi-node-driver-9ccgt" WorkloadEndpoint="localhost-k8s-csi--node--driver--9ccgt-eth0" May 27 03:16:59.912506 containerd[1578]: 2025-05-27 03:16:59.901 [INFO][4237] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Namespace="calico-system" Pod="csi-node-driver-9ccgt" WorkloadEndpoint="localhost-k8s-csi--node--driver--9ccgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9ccgt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f76a0d4-a083-4947-aa3e-c00e6bb39edf", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 36, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e", Pod:"csi-node-driver-9ccgt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali532d481b9e8", MAC:"8e:e2:59:db:fd:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:16:59.912589 containerd[1578]: 2025-05-27 03:16:59.908 [INFO][4237] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" Namespace="calico-system" Pod="csi-node-driver-9ccgt" WorkloadEndpoint="localhost-k8s-csi--node--driver--9ccgt-eth0" May 27 03:16:59.936516 containerd[1578]: time="2025-05-27T03:16:59.936465453Z" level=info msg="connecting to shim 9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e" address="unix:///run/containerd/s/6275657fced72b90715e6ae1fa21a7a0eea099d53c0d0cb0932db52eb74cdb22" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:59.965713 systemd[1]: Started cri-containerd-9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e.scope - libcontainer container 
9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e. May 27 03:16:59.978954 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:16:59.996185 containerd[1578]: time="2025-05-27T03:16:59.996044387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9ccgt,Uid:4f76a0d4-a083-4947-aa3e-c00e6bb39edf,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e\"" May 27 03:17:00.000670 containerd[1578]: time="2025-05-27T03:17:00.000633111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:17:00.009521 systemd-networkd[1493]: calif2a3572f428: Link UP May 27 03:17:00.011176 systemd-networkd[1493]: calif2a3572f428: Gained carrier May 27 03:17:00.023703 containerd[1578]: 2025-05-27 03:16:59.838 [INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0 calico-kube-controllers-84868df674- calico-system 21aef915-9ac2-4858-b6ff-396dbfcdc7d5 848 0 2025-05-27 03:16:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84868df674 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84868df674-nwzjl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif2a3572f428 [] [] }} ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Namespace="calico-system" Pod="calico-kube-controllers-84868df674-nwzjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-" May 27 03:17:00.023703 containerd[1578]: 2025-05-27 03:16:59.838 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Namespace="calico-system" Pod="calico-kube-controllers-84868df674-nwzjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" May 27 03:17:00.023703 containerd[1578]: 2025-05-27 03:16:59.865 [INFO][4266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" HandleID="k8s-pod-network.ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Workload="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.865 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" HandleID="k8s-pod-network.ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Workload="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000587bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84868df674-nwzjl", "timestamp":"2025-05-27 03:16:59.864995187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.865 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.894 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.894 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.974 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" host="localhost" May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.980 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.986 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.988 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.990 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:00.023986 containerd[1578]: 2025-05-27 03:16:59.990 [INFO][4266] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" host="localhost" May 27 03:17:00.024196 containerd[1578]: 2025-05-27 03:16:59.992 [INFO][4266] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2 May 27 03:17:00.024196 containerd[1578]: 2025-05-27 03:16:59.997 [INFO][4266] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" host="localhost" May 27 03:17:00.024196 containerd[1578]: 2025-05-27 03:17:00.003 [INFO][4266] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" host="localhost" May 27 03:17:00.024196 containerd[1578]: 2025-05-27 03:17:00.003 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" host="localhost" May 27 03:17:00.024196 containerd[1578]: 2025-05-27 03:17:00.003 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:00.024196 containerd[1578]: 2025-05-27 03:17:00.003 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" HandleID="k8s-pod-network.ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Workload="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" May 27 03:17:00.024332 containerd[1578]: 2025-05-27 03:17:00.007 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Namespace="calico-system" Pod="calico-kube-controllers-84868df674-nwzjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0", GenerateName:"calico-kube-controllers-84868df674-", Namespace:"calico-system", SelfLink:"", UID:"21aef915-9ac2-4858-b6ff-396dbfcdc7d5", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84868df674", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84868df674-nwzjl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2a3572f428", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:00.024390 containerd[1578]: 2025-05-27 03:17:00.007 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Namespace="calico-system" Pod="calico-kube-controllers-84868df674-nwzjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" May 27 03:17:00.024390 containerd[1578]: 2025-05-27 03:17:00.007 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2a3572f428 ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Namespace="calico-system" Pod="calico-kube-controllers-84868df674-nwzjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" May 27 03:17:00.024390 containerd[1578]: 2025-05-27 03:17:00.012 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Namespace="calico-system" Pod="calico-kube-controllers-84868df674-nwzjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" May 27 03:17:00.024452 containerd[1578]: 
2025-05-27 03:17:00.012 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Namespace="calico-system" Pod="calico-kube-controllers-84868df674-nwzjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0", GenerateName:"calico-kube-controllers-84868df674-", Namespace:"calico-system", SelfLink:"", UID:"21aef915-9ac2-4858-b6ff-396dbfcdc7d5", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84868df674", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2", Pod:"calico-kube-controllers-84868df674-nwzjl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2a3572f428", MAC:"f6:ce:19:25:29:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:00.024503 containerd[1578]: 
2025-05-27 03:17:00.019 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" Namespace="calico-system" Pod="calico-kube-controllers-84868df674-nwzjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84868df674--nwzjl-eth0" May 27 03:17:00.049507 containerd[1578]: time="2025-05-27T03:17:00.049384774Z" level=info msg="connecting to shim ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2" address="unix:///run/containerd/s/902423a32629649467139de0aab35831cfca037b6d8644330f2bd2b35278fbda" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:00.083766 systemd[1]: Started cri-containerd-ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2.scope - libcontainer container ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2. May 27 03:17:00.097204 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:00.127055 containerd[1578]: time="2025-05-27T03:17:00.127002603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84868df674-nwzjl,Uid:21aef915-9ac2-4858-b6ff-396dbfcdc7d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2\"" May 27 03:17:00.782969 containerd[1578]: time="2025-05-27T03:17:00.782896253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wzs98,Uid:bbf03748-7489-419f-912b-a5d0b65caa63,Namespace:kube-system,Attempt:0,}" May 27 03:17:00.892920 systemd-networkd[1493]: calif0b358b82fc: Link UP May 27 03:17:00.893824 systemd-networkd[1493]: calif0b358b82fc: Gained carrier May 27 03:17:00.909658 containerd[1578]: 2025-05-27 03:17:00.827 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wzs98-eth0 
coredns-674b8bbfcf- kube-system bbf03748-7489-419f-912b-a5d0b65caa63 847 0 2025-05-27 03:16:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wzs98 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0b358b82fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-wzs98" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wzs98-" May 27 03:17:00.909658 containerd[1578]: 2025-05-27 03:17:00.827 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-wzs98" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" May 27 03:17:00.909658 containerd[1578]: 2025-05-27 03:17:00.853 [INFO][4406] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" HandleID="k8s-pod-network.c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Workload="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.853 [INFO][4406] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" HandleID="k8s-pod-network.c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Workload="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139870), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wzs98", "timestamp":"2025-05-27 03:17:00.853537437 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.853 [INFO][4406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.853 [INFO][4406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.853 [INFO][4406] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.862 [INFO][4406] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" host="localhost" May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.867 [INFO][4406] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.871 [INFO][4406] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.873 [INFO][4406] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.875 [INFO][4406] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:00.910031 containerd[1578]: 2025-05-27 03:17:00.875 [INFO][4406] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" host="localhost" May 27 03:17:00.910248 containerd[1578]: 2025-05-27 03:17:00.876 [INFO][4406] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd May 27 
03:17:00.910248 containerd[1578]: 2025-05-27 03:17:00.879 [INFO][4406] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" host="localhost" May 27 03:17:00.910248 containerd[1578]: 2025-05-27 03:17:00.886 [INFO][4406] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" host="localhost" May 27 03:17:00.910248 containerd[1578]: 2025-05-27 03:17:00.886 [INFO][4406] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" host="localhost" May 27 03:17:00.910248 containerd[1578]: 2025-05-27 03:17:00.886 [INFO][4406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:00.910248 containerd[1578]: 2025-05-27 03:17:00.886 [INFO][4406] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" HandleID="k8s-pod-network.c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Workload="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" May 27 03:17:00.910410 containerd[1578]: 2025-05-27 03:17:00.889 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-wzs98" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wzs98-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bbf03748-7489-419f-912b-a5d0b65caa63", ResourceVersion:"847", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wzs98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0b358b82fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:00.910469 containerd[1578]: 2025-05-27 03:17:00.889 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-wzs98" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" May 27 03:17:00.910469 containerd[1578]: 2025-05-27 03:17:00.889 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0b358b82fc ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-wzs98" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" May 27 03:17:00.910469 containerd[1578]: 2025-05-27 03:17:00.894 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-wzs98" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" May 27 03:17:00.910577 containerd[1578]: 2025-05-27 03:17:00.894 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-wzs98" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wzs98-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bbf03748-7489-419f-912b-a5d0b65caa63", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd", Pod:"coredns-674b8bbfcf-wzs98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0b358b82fc", MAC:"be:96:fd:ac:f7:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:00.910577 containerd[1578]: 2025-05-27 03:17:00.904 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-wzs98" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wzs98-eth0" May 27 03:17:00.934741 containerd[1578]: time="2025-05-27T03:17:00.934671967Z" level=info msg="connecting to shim c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd" address="unix:///run/containerd/s/d829f71c0bf1536f7e27c256e86b5a0016bbef967b1bccde646398e204aada5c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:00.961691 systemd[1]: Started cri-containerd-c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd.scope - libcontainer container c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd. 
May 27 03:17:00.976410 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:01.010214 containerd[1578]: time="2025-05-27T03:17:01.010145841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wzs98,Uid:bbf03748-7489-419f-912b-a5d0b65caa63,Namespace:kube-system,Attempt:0,} returns sandbox id \"c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd\"" May 27 03:17:01.026018 containerd[1578]: time="2025-05-27T03:17:01.025917274Z" level=info msg="CreateContainer within sandbox \"c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:17:01.145942 containerd[1578]: time="2025-05-27T03:17:01.145778569Z" level=info msg="Container 2ae5f720d005b2e0db02af809e93b58d3732075c97645a5c8af9636643a9a2f0: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:01.146480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364879302.mount: Deactivated successfully. 
May 27 03:17:01.214167 containerd[1578]: time="2025-05-27T03:17:01.214085424Z" level=info msg="CreateContainer within sandbox \"c037ed3ed92c6efcd626ceb7a22cb157f30f3d92aee75f3324fb0eef93a505bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ae5f720d005b2e0db02af809e93b58d3732075c97645a5c8af9636643a9a2f0\"" May 27 03:17:01.214757 containerd[1578]: time="2025-05-27T03:17:01.214661435Z" level=info msg="StartContainer for \"2ae5f720d005b2e0db02af809e93b58d3732075c97645a5c8af9636643a9a2f0\"" May 27 03:17:01.215623 containerd[1578]: time="2025-05-27T03:17:01.215592483Z" level=info msg="connecting to shim 2ae5f720d005b2e0db02af809e93b58d3732075c97645a5c8af9636643a9a2f0" address="unix:///run/containerd/s/d829f71c0bf1536f7e27c256e86b5a0016bbef967b1bccde646398e204aada5c" protocol=ttrpc version=3 May 27 03:17:01.239743 systemd[1]: Started cri-containerd-2ae5f720d005b2e0db02af809e93b58d3732075c97645a5c8af9636643a9a2f0.scope - libcontainer container 2ae5f720d005b2e0db02af809e93b58d3732075c97645a5c8af9636643a9a2f0. May 27 03:17:01.290399 containerd[1578]: time="2025-05-27T03:17:01.290345648Z" level=info msg="StartContainer for \"2ae5f720d005b2e0db02af809e93b58d3732075c97645a5c8af9636643a9a2f0\" returns successfully" May 27 03:17:01.390132 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:53282.service - OpenSSH per-connection server daemon (10.0.0.1:53282). May 27 03:17:01.400720 systemd-networkd[1493]: cali532d481b9e8: Gained IPv6LL May 27 03:17:01.457947 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 53282 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:17:01.462254 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:01.472920 systemd-logind[1565]: New session 9 of user core. May 27 03:17:01.481959 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 27 03:17:01.546956 containerd[1578]: time="2025-05-27T03:17:01.546425522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:01.547131 containerd[1578]: time="2025-05-27T03:17:01.547112481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:17:01.548213 containerd[1578]: time="2025-05-27T03:17:01.548171429Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:01.550991 containerd[1578]: time="2025-05-27T03:17:01.550969110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:01.551773 containerd[1578]: time="2025-05-27T03:17:01.551753272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.550754115s" May 27 03:17:01.551883 containerd[1578]: time="2025-05-27T03:17:01.551868989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:17:01.556009 containerd[1578]: time="2025-05-27T03:17:01.555966029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:17:01.580201 containerd[1578]: time="2025-05-27T03:17:01.580158089Z" level=info msg="CreateContainer within sandbox \"9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:17:01.598313 containerd[1578]: time="2025-05-27T03:17:01.598245791Z" level=info msg="Container 2b8fc808b96acaf1c9826879a36867ff17bed4eeda44c62ec71a5d2f2b91ebb5: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:01.608388 containerd[1578]: time="2025-05-27T03:17:01.608346900Z" level=info msg="CreateContainer within sandbox \"9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2b8fc808b96acaf1c9826879a36867ff17bed4eeda44c62ec71a5d2f2b91ebb5\"" May 27 03:17:01.608929 containerd[1578]: time="2025-05-27T03:17:01.608895681Z" level=info msg="StartContainer for \"2b8fc808b96acaf1c9826879a36867ff17bed4eeda44c62ec71a5d2f2b91ebb5\"" May 27 03:17:01.610622 containerd[1578]: time="2025-05-27T03:17:01.610319073Z" level=info msg="connecting to shim 2b8fc808b96acaf1c9826879a36867ff17bed4eeda44c62ec71a5d2f2b91ebb5" address="unix:///run/containerd/s/6275657fced72b90715e6ae1fa21a7a0eea099d53c0d0cb0932db52eb74cdb22" protocol=ttrpc version=3 May 27 03:17:01.635663 sshd[4514]: Connection closed by 10.0.0.1 port 53282 May 27 03:17:01.635852 sshd-session[4505]: pam_unix(sshd:session): session closed for user core May 27 03:17:01.640704 systemd[1]: Started cri-containerd-2b8fc808b96acaf1c9826879a36867ff17bed4eeda44c62ec71a5d2f2b91ebb5.scope - libcontainer container 2b8fc808b96acaf1c9826879a36867ff17bed4eeda44c62ec71a5d2f2b91ebb5. May 27 03:17:01.641182 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:53282.service: Deactivated successfully. May 27 03:17:01.643617 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:17:01.644990 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit. May 27 03:17:01.647777 systemd-logind[1565]: Removed session 9. 
May 27 03:17:01.696661 containerd[1578]: time="2025-05-27T03:17:01.696510548Z" level=info msg="StartContainer for \"2b8fc808b96acaf1c9826879a36867ff17bed4eeda44c62ec71a5d2f2b91ebb5\" returns successfully" May 27 03:17:01.784021 containerd[1578]: time="2025-05-27T03:17:01.783945548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rtxw,Uid:4080b429-c6ee-45b0-9def-70e180093c71,Namespace:kube-system,Attempt:0,}" May 27 03:17:01.885179 systemd-networkd[1493]: cali2c7102ee7f0: Link UP May 27 03:17:01.886119 systemd-networkd[1493]: cali2c7102ee7f0: Gained carrier May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.820 [INFO][4557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0 coredns-674b8bbfcf- kube-system 4080b429-c6ee-45b0-9def-70e180093c71 850 0 2025-05-27 03:16:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4rtxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c7102ee7f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rtxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rtxw-" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.820 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rtxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.847 [INFO][4572] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" HandleID="k8s-pod-network.358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Workload="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.847 [INFO][4572] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" HandleID="k8s-pod-network.358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Workload="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138560), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4rtxw", "timestamp":"2025-05-27 03:17:01.847047271 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.847 [INFO][4572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.847 [INFO][4572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.847 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.854 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" host="localhost" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.859 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.863 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.865 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.867 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.867 [INFO][4572] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" host="localhost" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.868 [INFO][4572] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.872 [INFO][4572] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" host="localhost" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.877 [INFO][4572] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" host="localhost" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.877 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" host="localhost" May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.877 [INFO][4572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:01.903387 containerd[1578]: 2025-05-27 03:17:01.877 [INFO][4572] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" HandleID="k8s-pod-network.358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Workload="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" May 27 03:17:01.903972 containerd[1578]: 2025-05-27 03:17:01.881 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rtxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4080b429-c6ee-45b0-9def-70e180093c71", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4rtxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c7102ee7f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:01.903972 containerd[1578]: 2025-05-27 03:17:01.881 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rtxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" May 27 03:17:01.903972 containerd[1578]: 2025-05-27 03:17:01.881 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c7102ee7f0 ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rtxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" May 27 03:17:01.903972 containerd[1578]: 2025-05-27 03:17:01.885 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rtxw" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" May 27 03:17:01.903972 containerd[1578]: 2025-05-27 03:17:01.886 [INFO][4557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rtxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4080b429-c6ee-45b0-9def-70e180093c71", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da", Pod:"coredns-674b8bbfcf-4rtxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c7102ee7f0", MAC:"5a:0d:d9:b4:b6:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:01.903972 containerd[1578]: 2025-05-27 03:17:01.895 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rtxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rtxw-eth0" May 27 03:17:01.912978 systemd-networkd[1493]: calif2a3572f428: Gained IPv6LL May 27 03:17:01.929683 containerd[1578]: time="2025-05-27T03:17:01.929634442Z" level=info msg="connecting to shim 358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da" address="unix:///run/containerd/s/0628d1bd8d49ad27a2d333aa3bab55b013b8249f1ef4aece7ba726b699ba57c3" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:01.959900 systemd[1]: Started cri-containerd-358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da.scope - libcontainer container 358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da. 
May 27 03:17:01.963186 kubelet[2688]: I0527 03:17:01.963102 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wzs98" podStartSLOduration=39.963078277 podStartE2EDuration="39.963078277s" podCreationTimestamp="2025-05-27 03:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:17:01.960912462 +0000 UTC m=+45.268717727" watchObservedRunningTime="2025-05-27 03:17:01.963078277 +0000 UTC m=+45.270883542" May 27 03:17:01.983951 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:02.019128 containerd[1578]: time="2025-05-27T03:17:02.019072890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rtxw,Uid:4080b429-c6ee-45b0-9def-70e180093c71,Namespace:kube-system,Attempt:0,} returns sandbox id \"358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da\"" May 27 03:17:02.025262 containerd[1578]: time="2025-05-27T03:17:02.025215860Z" level=info msg="CreateContainer within sandbox \"358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:17:02.036503 containerd[1578]: time="2025-05-27T03:17:02.036386728Z" level=info msg="Container 441db14d058dcb90264178d544357e7be2d15f4bdf6fa850890c6d2a17ce7bc6: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:02.041699 systemd-networkd[1493]: calif0b358b82fc: Gained IPv6LL May 27 03:17:02.042138 containerd[1578]: time="2025-05-27T03:17:02.041925945Z" level=info msg="CreateContainer within sandbox \"358978d05d1916536c05437cedcce8589e5ba8ecb21df0cf526d85f821efe9da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"441db14d058dcb90264178d544357e7be2d15f4bdf6fa850890c6d2a17ce7bc6\"" May 27 03:17:02.042869 containerd[1578]: time="2025-05-27T03:17:02.042834460Z" 
level=info msg="StartContainer for \"441db14d058dcb90264178d544357e7be2d15f4bdf6fa850890c6d2a17ce7bc6\"" May 27 03:17:02.044195 containerd[1578]: time="2025-05-27T03:17:02.044132557Z" level=info msg="connecting to shim 441db14d058dcb90264178d544357e7be2d15f4bdf6fa850890c6d2a17ce7bc6" address="unix:///run/containerd/s/0628d1bd8d49ad27a2d333aa3bab55b013b8249f1ef4aece7ba726b699ba57c3" protocol=ttrpc version=3 May 27 03:17:02.073683 systemd[1]: Started cri-containerd-441db14d058dcb90264178d544357e7be2d15f4bdf6fa850890c6d2a17ce7bc6.scope - libcontainer container 441db14d058dcb90264178d544357e7be2d15f4bdf6fa850890c6d2a17ce7bc6. May 27 03:17:02.108068 containerd[1578]: time="2025-05-27T03:17:02.108023016Z" level=info msg="StartContainer for \"441db14d058dcb90264178d544357e7be2d15f4bdf6fa850890c6d2a17ce7bc6\" returns successfully" May 27 03:17:02.782710 containerd[1578]: time="2025-05-27T03:17:02.782658281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb869445-ns9tb,Uid:96c159fe-d07b-4a1a-92e7-57ea94582ec6,Namespace:calico-apiserver,Attempt:0,}" May 27 03:17:02.783188 containerd[1578]: time="2025-05-27T03:17:02.783138222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-qbl7d,Uid:9ff6044c-a1cf-41a0-9830-a2555119f01d,Namespace:calico-system,Attempt:0,}" May 27 03:17:03.112182 systemd-networkd[1493]: calibec6e7e7a73: Link UP May 27 03:17:03.116235 systemd-networkd[1493]: calibec6e7e7a73: Gained carrier May 27 03:17:03.127754 kubelet[2688]: I0527 03:17:03.127592 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4rtxw" podStartSLOduration=41.127571951 podStartE2EDuration="41.127571951s" podCreationTimestamp="2025-05-27 03:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:17:02.970970026 +0000 UTC m=+46.278775281" watchObservedRunningTime="2025-05-27 
03:17:03.127571951 +0000 UTC m=+46.435377216" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.032 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0 goldmane-78d55f7ddc- calico-system 9ff6044c-a1cf-41a0-9830-a2555119f01d 854 0 2025-05-27 03:16:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-qbl7d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibec6e7e7a73 [] [] }} ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-qbl7d" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--qbl7d-" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.032 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-qbl7d" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.063 [INFO][4724] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" HandleID="k8s-pod-network.e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Workload="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.064 [INFO][4724] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" HandleID="k8s-pod-network.e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Workload="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000495b90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-qbl7d", "timestamp":"2025-05-27 03:17:03.06392864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.064 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.064 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.064 [INFO][4724] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.073 [INFO][4724] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" host="localhost" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.082 [INFO][4724] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.087 [INFO][4724] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.088 [INFO][4724] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.091 [INFO][4724] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.091 [INFO][4724] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" host="localhost" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.092 [INFO][4724] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1 May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.097 [INFO][4724] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" host="localhost" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.104 [INFO][4724] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" host="localhost" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.104 [INFO][4724] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" host="localhost" May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.104 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:17:03.136540 containerd[1578]: 2025-05-27 03:17:03.104 [INFO][4724] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" HandleID="k8s-pod-network.e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Workload="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" May 27 03:17:03.138165 containerd[1578]: 2025-05-27 03:17:03.108 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-qbl7d" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"9ff6044c-a1cf-41a0-9830-a2555119f01d", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-qbl7d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibec6e7e7a73", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:03.138165 containerd[1578]: 2025-05-27 03:17:03.108 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-qbl7d" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" May 27 03:17:03.138165 containerd[1578]: 2025-05-27 03:17:03.108 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibec6e7e7a73 ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-qbl7d" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" May 27 03:17:03.138165 containerd[1578]: 2025-05-27 03:17:03.117 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-qbl7d" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" May 27 03:17:03.138165 containerd[1578]: 2025-05-27 03:17:03.118 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-qbl7d" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"9ff6044c-a1cf-41a0-9830-a2555119f01d", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 36, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1", Pod:"goldmane-78d55f7ddc-qbl7d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibec6e7e7a73", MAC:"16:f4:e0:31:07:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:03.138165 containerd[1578]: 2025-05-27 03:17:03.130 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" Namespace="calico-system" Pod="goldmane-78d55f7ddc-qbl7d" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--qbl7d-eth0" May 27 03:17:03.181301 containerd[1578]: time="2025-05-27T03:17:03.181041026Z" level=info msg="connecting to shim e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1" address="unix:///run/containerd/s/ef285973f6f9ee0d8f8981bd1c68c2570d18a1b85f592c545afdd4663010caae" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:03.232911 systemd[1]: Started cri-containerd-e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1.scope - libcontainer container e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1. 
May 27 03:17:03.237359 systemd-networkd[1493]: cali76a12713bd1: Link UP May 27 03:17:03.238591 systemd-networkd[1493]: cali76a12713bd1: Gained carrier May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.031 [INFO][4685] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0 calico-apiserver-6fcb869445- calico-apiserver 96c159fe-d07b-4a1a-92e7-57ea94582ec6 851 0 2025-05-27 03:16:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fcb869445 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fcb869445-ns9tb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali76a12713bd1 [] [] }} ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-ns9tb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.031 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-ns9tb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.084 [INFO][4723] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" HandleID="k8s-pod-network.41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Workload="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.084 [INFO][4723] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" HandleID="k8s-pod-network.41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Workload="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a56a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fcb869445-ns9tb", "timestamp":"2025-05-27 03:17:03.084377661 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.084 [INFO][4723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.104 [INFO][4723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.104 [INFO][4723] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.174 [INFO][4723] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" host="localhost" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.183 [INFO][4723] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.188 [INFO][4723] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.190 [INFO][4723] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.193 [INFO][4723] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.193 [INFO][4723] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" host="localhost" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.197 [INFO][4723] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.207 [INFO][4723] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" host="localhost" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.215 [INFO][4723] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" host="localhost" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.216 [INFO][4723] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" host="localhost" May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.216 [INFO][4723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:03.251368 containerd[1578]: 2025-05-27 03:17:03.216 [INFO][4723] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" HandleID="k8s-pod-network.41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Workload="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" May 27 03:17:03.252995 containerd[1578]: 2025-05-27 03:17:03.229 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-ns9tb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0", GenerateName:"calico-apiserver-6fcb869445-", Namespace:"calico-apiserver", SelfLink:"", UID:"96c159fe-d07b-4a1a-92e7-57ea94582ec6", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb869445", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fcb869445-ns9tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76a12713bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:03.252995 containerd[1578]: 2025-05-27 03:17:03.229 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-ns9tb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" May 27 03:17:03.252995 containerd[1578]: 2025-05-27 03:17:03.229 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76a12713bd1 ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-ns9tb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" May 27 03:17:03.252995 containerd[1578]: 2025-05-27 03:17:03.239 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-ns9tb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" May 27 03:17:03.252995 containerd[1578]: 2025-05-27 03:17:03.239 [INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-ns9tb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0", GenerateName:"calico-apiserver-6fcb869445-", Namespace:"calico-apiserver", SelfLink:"", UID:"96c159fe-d07b-4a1a-92e7-57ea94582ec6", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb869445", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab", Pod:"calico-apiserver-6fcb869445-ns9tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76a12713bd1", MAC:"1a:2b:5c:8f:46:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:03.252995 containerd[1578]: 2025-05-27 03:17:03.247 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-ns9tb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--ns9tb-eth0" May 27 03:17:03.257787 systemd-networkd[1493]: cali2c7102ee7f0: Gained IPv6LL May 27 03:17:03.268283 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:03.669285 containerd[1578]: time="2025-05-27T03:17:03.669219270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-qbl7d,Uid:9ff6044c-a1cf-41a0-9830-a2555119f01d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e66ceef7982b82c7bea89123bd42972edbcbee458dda1675926c217c779744e1\"" May 27 03:17:03.728772 containerd[1578]: time="2025-05-27T03:17:03.728709266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:03.731427 containerd[1578]: time="2025-05-27T03:17:03.731374539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:17:03.734234 containerd[1578]: time="2025-05-27T03:17:03.734135661Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:03.736768 containerd[1578]: time="2025-05-27T03:17:03.736728487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:03.737211 containerd[1578]: time="2025-05-27T03:17:03.737161920Z" level=info msg="connecting to shim 41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab" 
address="unix:///run/containerd/s/8fc040d1ef8942f1849f340a7ce055ae999656184d7e1145e34e8d4539b64ca5" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:03.738590 containerd[1578]: time="2025-05-27T03:17:03.738520510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.182430829s" May 27 03:17:03.738670 containerd[1578]: time="2025-05-27T03:17:03.738599188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:17:03.740793 containerd[1578]: time="2025-05-27T03:17:03.740760475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 03:17:03.752800 containerd[1578]: time="2025-05-27T03:17:03.752756790Z" level=info msg="CreateContainer within sandbox \"ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 03:17:03.762521 containerd[1578]: time="2025-05-27T03:17:03.762467015Z" level=info msg="Container f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:03.767726 systemd[1]: Started cri-containerd-41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab.scope - libcontainer container 41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab. 
May 27 03:17:03.779065 containerd[1578]: time="2025-05-27T03:17:03.779017298Z" level=info msg="CreateContainer within sandbox \"ac1c07116913b4213d77f477396855aecb3e820d763999832bdbd79215794ed2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02\"" May 27 03:17:03.781608 containerd[1578]: time="2025-05-27T03:17:03.781233508Z" level=info msg="StartContainer for \"f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02\"" May 27 03:17:03.781959 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:03.783347 containerd[1578]: time="2025-05-27T03:17:03.783305667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb869445-7f5xw,Uid:9319b5b9-b72a-46d4-bde6-100d6b9745f2,Namespace:calico-apiserver,Attempt:0,}" May 27 03:17:03.783774 containerd[1578]: time="2025-05-27T03:17:03.783745222Z" level=info msg="connecting to shim f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02" address="unix:///run/containerd/s/902423a32629649467139de0aab35831cfca037b6d8644330f2bd2b35278fbda" protocol=ttrpc version=3 May 27 03:17:03.828616 containerd[1578]: time="2025-05-27T03:17:03.828294916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb869445-ns9tb,Uid:96c159fe-d07b-4a1a-92e7-57ea94582ec6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab\"" May 27 03:17:03.832695 systemd[1]: Started cri-containerd-f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02.scope - libcontainer container f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02. 
May 27 03:17:03.901970 containerd[1578]: time="2025-05-27T03:17:03.901911266Z" level=info msg="StartContainer for \"f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02\" returns successfully" May 27 03:17:03.909463 systemd-networkd[1493]: cali4e68996c8c5: Link UP May 27 03:17:03.911657 systemd-networkd[1493]: cali4e68996c8c5: Gained carrier May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.826 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0 calico-apiserver-6fcb869445- calico-apiserver 9319b5b9-b72a-46d4-bde6-100d6b9745f2 852 0 2025-05-27 03:16:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fcb869445 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fcb869445-7f5xw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e68996c8c5 [] [] }} ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-7f5xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.826 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-7f5xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.857 [INFO][4879] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" 
HandleID="k8s-pod-network.f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Workload="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.857 [INFO][4879] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" HandleID="k8s-pod-network.f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Workload="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042e0e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fcb869445-7f5xw", "timestamp":"2025-05-27 03:17:03.857747866 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.858 [INFO][4879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.858 [INFO][4879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.858 [INFO][4879] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.864 [INFO][4879] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" host="localhost" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.871 [INFO][4879] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.875 [INFO][4879] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.877 [INFO][4879] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.881 [INFO][4879] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.882 [INFO][4879] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" host="localhost" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.884 [INFO][4879] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03 May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.889 [INFO][4879] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" host="localhost" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.899 [INFO][4879] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" host="localhost" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.899 [INFO][4879] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" host="localhost" May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.899 [INFO][4879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:03.931641 containerd[1578]: 2025-05-27 03:17:03.899 [INFO][4879] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" HandleID="k8s-pod-network.f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Workload="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" May 27 03:17:03.933409 containerd[1578]: 2025-05-27 03:17:03.904 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-7f5xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0", GenerateName:"calico-apiserver-6fcb869445-", Namespace:"calico-apiserver", SelfLink:"", UID:"9319b5b9-b72a-46d4-bde6-100d6b9745f2", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb869445", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fcb869445-7f5xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e68996c8c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:03.933409 containerd[1578]: 2025-05-27 03:17:03.904 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-7f5xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" May 27 03:17:03.933409 containerd[1578]: 2025-05-27 03:17:03.904 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e68996c8c5 ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-7f5xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" May 27 03:17:03.933409 containerd[1578]: 2025-05-27 03:17:03.911 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-7f5xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" May 27 03:17:03.933409 containerd[1578]: 2025-05-27 03:17:03.914 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-7f5xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0", GenerateName:"calico-apiserver-6fcb869445-", Namespace:"calico-apiserver", SelfLink:"", UID:"9319b5b9-b72a-46d4-bde6-100d6b9745f2", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcb869445", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03", Pod:"calico-apiserver-6fcb869445-7f5xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e68996c8c5", MAC:"52:2b:83:46:dc:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:03.933409 containerd[1578]: 2025-05-27 03:17:03.924 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" Namespace="calico-apiserver" Pod="calico-apiserver-6fcb869445-7f5xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcb869445--7f5xw-eth0" May 27 03:17:03.969579 containerd[1578]: time="2025-05-27T03:17:03.968793236Z" level=info msg="connecting to shim f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03" address="unix:///run/containerd/s/a164428a5c4436e4ecb6cc374a091d2c19dd235c7adc825b74418cd1e1e7953a" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:03.976525 kubelet[2688]: I0527 03:17:03.976463 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84868df674-nwzjl" podStartSLOduration=24.365136663 podStartE2EDuration="27.976443414s" podCreationTimestamp="2025-05-27 03:16:36 +0000 UTC" firstStartedPulling="2025-05-27 03:17:00.128401158 +0000 UTC m=+43.436206423" lastFinishedPulling="2025-05-27 03:17:03.739707898 +0000 UTC m=+47.047513174" observedRunningTime="2025-05-27 03:17:03.976197944 +0000 UTC m=+47.284003209" watchObservedRunningTime="2025-05-27 03:17:03.976443414 +0000 UTC m=+47.284248679" May 27 03:17:04.016057 systemd[1]: Started cri-containerd-f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03.scope - libcontainer container f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03. 
May 27 03:17:04.048309 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 27 03:17:04.078525 containerd[1578]: time="2025-05-27T03:17:04.078470762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02\" id:\"5e43ca54aaa202612dad3343a76c4ac78dd510919eb370e609ea8a568d8e8ac6\" pid:4972 exited_at:{seconds:1748315824 nanos:78049391}"
May 27 03:17:04.107849 containerd[1578]: time="2025-05-27T03:17:04.107798034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcb869445-7f5xw,Uid:9319b5b9-b72a-46d4-bde6-100d6b9745f2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03\""
May 27 03:17:04.856743 systemd-networkd[1493]: calibec6e7e7a73: Gained IPv6LL
May 27 03:17:05.112830 systemd-networkd[1493]: cali76a12713bd1: Gained IPv6LL
May 27 03:17:05.176759 systemd-networkd[1493]: cali4e68996c8c5: Gained IPv6LL
May 27 03:17:05.884684 containerd[1578]: time="2025-05-27T03:17:05.884637876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:17:05.885581 containerd[1578]: time="2025-05-27T03:17:05.885519409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639"
May 27 03:17:05.886791 containerd[1578]: time="2025-05-27T03:17:05.886763845Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:17:05.888697 containerd[1578]: time="2025-05-27T03:17:05.888669482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:17:05.889239 containerd[1578]: time="2025-05-27T03:17:05.889182143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.148371425s"
May 27 03:17:05.889271 containerd[1578]: time="2025-05-27T03:17:05.889240323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 27 03:17:05.890158 containerd[1578]: time="2025-05-27T03:17:05.890101679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:17:05.893616 containerd[1578]: time="2025-05-27T03:17:05.893582893Z" level=info msg="CreateContainer within sandbox \"9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 27 03:17:05.902608 containerd[1578]: time="2025-05-27T03:17:05.902580048Z" level=info msg="Container 7ad6e861d2a292cb2f1b9f5a4e0c957ca31b62076ef47bc637e6536139b169f9: CDI devices from CRI Config.CDIDevices: []"
May 27 03:17:05.912135 containerd[1578]: time="2025-05-27T03:17:05.912090387Z" level=info msg="CreateContainer within sandbox \"9c8aee06a176d1eaacb0cac9e200a17647291daadf1ccbd17078982055296e8e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7ad6e861d2a292cb2f1b9f5a4e0c957ca31b62076ef47bc637e6536139b169f9\""
May 27 03:17:05.912525 containerd[1578]: time="2025-05-27T03:17:05.912500245Z" level=info msg="StartContainer for \"7ad6e861d2a292cb2f1b9f5a4e0c957ca31b62076ef47bc637e6536139b169f9\""
May 27 03:17:05.913850 containerd[1578]: time="2025-05-27T03:17:05.913824211Z" level=info msg="connecting to shim 7ad6e861d2a292cb2f1b9f5a4e0c957ca31b62076ef47bc637e6536139b169f9" address="unix:///run/containerd/s/6275657fced72b90715e6ae1fa21a7a0eea099d53c0d0cb0932db52eb74cdb22" protocol=ttrpc version=3
May 27 03:17:05.936792 systemd[1]: Started cri-containerd-7ad6e861d2a292cb2f1b9f5a4e0c957ca31b62076ef47bc637e6536139b169f9.scope - libcontainer container 7ad6e861d2a292cb2f1b9f5a4e0c957ca31b62076ef47bc637e6536139b169f9.
May 27 03:17:06.138728 containerd[1578]: time="2025-05-27T03:17:06.138579669Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:17:06.302760 containerd[1578]: time="2025-05-27T03:17:06.302600334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:17:06.302760 containerd[1578]: time="2025-05-27T03:17:06.302668953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:17:06.303569 kubelet[2688]: E0527 03:17:06.303481 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:17:06.304122 kubelet[2688]: E0527 03:17:06.303606 2688 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:17:06.304122 kubelet[2688]: E0527 03:17:06.303930 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glzqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-qbl7d_calico-system(9ff6044c-a1cf-41a0-9830-a2555119f01d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:17:06.304254 containerd[1578]: time="2025-05-27T03:17:06.304137669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 27 03:17:06.304611 containerd[1578]: time="2025-05-27T03:17:06.304576914Z" level=info msg="StartContainer for \"7ad6e861d2a292cb2f1b9f5a4e0c957ca31b62076ef47bc637e6536139b169f9\" returns successfully"
May 27 03:17:06.305107 kubelet[2688]: E0527 03:17:06.305070 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-qbl7d" podUID="9ff6044c-a1cf-41a0-9830-a2555119f01d"
May 27 03:17:06.657500 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:36352.service - OpenSSH per-connection server daemon (10.0.0.1:36352).
May 27 03:17:06.724588 sshd[5041]: Accepted publickey for core from 10.0.0.1 port 36352 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:06.726685 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:06.731934 systemd-logind[1565]: New session 10 of user core.
May 27 03:17:06.746742 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 03:17:06.855983 kubelet[2688]: I0527 03:17:06.855938 2688 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 27 03:17:06.876381 kubelet[2688]: I0527 03:17:06.876124 2688 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 27 03:17:07.020921 sshd[5043]: Connection closed by 10.0.0.1 port 36352
May 27 03:17:07.020812 sshd-session[5041]: pam_unix(sshd:session): session closed for user core
May 27 03:17:06.969093 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:36366.service - OpenSSH per-connection server daemon (10.0.0.1:36366).
May 27 03:17:07.028118 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:36352.service: Deactivated successfully.
May 27 03:17:07.028459 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit.
May 27 03:17:07.028827 kubelet[2688]: E0527 03:17:07.028788 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-qbl7d" podUID="9ff6044c-a1cf-41a0-9830-a2555119f01d"
May 27 03:17:07.031878 systemd[1]: session-10.scope: Deactivated successfully.
May 27 03:17:07.035970 systemd-logind[1565]: Removed session 10.
May 27 03:17:07.037962 sshd[5055]: Accepted publickey for core from 10.0.0.1 port 36366 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:07.039779 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:07.045012 systemd-logind[1565]: New session 11 of user core.
May 27 03:17:07.053697 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 03:17:07.217783 kubelet[2688]: I0527 03:17:07.217701 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9ccgt" podStartSLOduration=25.328155897 podStartE2EDuration="31.217677105s" podCreationTimestamp="2025-05-27 03:16:36 +0000 UTC" firstStartedPulling="2025-05-27 03:17:00.000370077 +0000 UTC m=+43.308175342" lastFinishedPulling="2025-05-27 03:17:05.889891285 +0000 UTC m=+49.197696550" observedRunningTime="2025-05-27 03:17:07.211726337 +0000 UTC m=+50.519531602" watchObservedRunningTime="2025-05-27 03:17:07.217677105 +0000 UTC m=+50.525482360"
May 27 03:17:07.363485 sshd[5062]: Connection closed by 10.0.0.1 port 36366
May 27 03:17:07.364881 sshd-session[5055]: pam_unix(sshd:session): session closed for user core
May 27 03:17:07.376924 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:36366.service: Deactivated successfully.
May 27 03:17:07.379918 systemd[1]: session-11.scope: Deactivated successfully.
May 27 03:17:07.381245 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit.
May 27 03:17:07.385419 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:36368.service - OpenSSH per-connection server daemon (10.0.0.1:36368).
May 27 03:17:07.387317 systemd-logind[1565]: Removed session 11.
May 27 03:17:07.444120 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 36368 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:07.445675 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:07.450719 systemd-logind[1565]: New session 12 of user core.
May 27 03:17:07.459732 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 03:17:07.588319 sshd[5077]: Connection closed by 10.0.0.1 port 36368
May 27 03:17:07.588654 sshd-session[5075]: pam_unix(sshd:session): session closed for user core
May 27 03:17:07.593839 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:36368.service: Deactivated successfully.
May 27 03:17:07.596178 systemd[1]: session-12.scope: Deactivated successfully.
May 27 03:17:07.597169 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit.
May 27 03:17:07.598622 systemd-logind[1565]: Removed session 12.
May 27 03:17:09.027453 containerd[1578]: time="2025-05-27T03:17:09.027378803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:17:09.028120 containerd[1578]: time="2025-05-27T03:17:09.028099445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431"
May 27 03:17:09.029374 containerd[1578]: time="2025-05-27T03:17:09.029325726Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:17:09.031145 containerd[1578]: time="2025-05-27T03:17:09.031109023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:17:09.031680 containerd[1578]: time="2025-05-27T03:17:09.031646511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 2.727477314s"
May 27 03:17:09.031680 containerd[1578]: time="2025-05-27T03:17:09.031672460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 27 03:17:09.032339 containerd[1578]: time="2025-05-27T03:17:09.032307251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 27 03:17:09.036733 containerd[1578]: time="2025-05-27T03:17:09.036694353Z" level=info msg="CreateContainer within sandbox \"41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 27 03:17:09.044712 containerd[1578]: time="2025-05-27T03:17:09.044665151Z" level=info msg="Container 16c7edce2d5e5a52a2f7b18416500a8670ec00987ef71661e7f38166150a37a3: CDI devices from CRI Config.CDIDevices: []"
May 27 03:17:09.054100 containerd[1578]: time="2025-05-27T03:17:09.054051214Z" level=info msg="CreateContainer within sandbox \"41e1886d3f4989c9891b5973c4355341ab1515b4709532e0822e2012bdb126ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"16c7edce2d5e5a52a2f7b18416500a8670ec00987ef71661e7f38166150a37a3\""
May 27 03:17:09.054743 containerd[1578]: time="2025-05-27T03:17:09.054714920Z" level=info msg="StartContainer for \"16c7edce2d5e5a52a2f7b18416500a8670ec00987ef71661e7f38166150a37a3\""
May 27 03:17:09.056042 containerd[1578]: time="2025-05-27T03:17:09.055879675Z" level=info msg="connecting to shim 16c7edce2d5e5a52a2f7b18416500a8670ec00987ef71661e7f38166150a37a3" address="unix:///run/containerd/s/8fc040d1ef8942f1849f340a7ce055ae999656184d7e1145e34e8d4539b64ca5" protocol=ttrpc version=3
May 27 03:17:09.088816 systemd[1]: Started cri-containerd-16c7edce2d5e5a52a2f7b18416500a8670ec00987ef71661e7f38166150a37a3.scope - libcontainer container 16c7edce2d5e5a52a2f7b18416500a8670ec00987ef71661e7f38166150a37a3.
May 27 03:17:09.143893 containerd[1578]: time="2025-05-27T03:17:09.143852487Z" level=info msg="StartContainer for \"16c7edce2d5e5a52a2f7b18416500a8670ec00987ef71661e7f38166150a37a3\" returns successfully"
May 27 03:17:09.430835 containerd[1578]: time="2025-05-27T03:17:09.430676164Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:17:09.431726 containerd[1578]: time="2025-05-27T03:17:09.431692622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77"
May 27 03:17:09.433385 containerd[1578]: time="2025-05-27T03:17:09.433346605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 401.012904ms"
May 27 03:17:09.433459 containerd[1578]: time="2025-05-27T03:17:09.433388393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 27 03:17:09.439522 containerd[1578]: time="2025-05-27T03:17:09.439482138Z" level=info msg="CreateContainer within sandbox \"f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 27 03:17:09.451144 containerd[1578]: time="2025-05-27T03:17:09.450735105Z" level=info msg="Container 562a471da007b25c7439a1159b4692d0cd33dbe0f82345ed5b8746cbaefd2fb6: CDI devices from CRI Config.CDIDevices: []"
May 27 03:17:09.460473 containerd[1578]: time="2025-05-27T03:17:09.460421521Z" level=info msg="CreateContainer within sandbox \"f03524912d4e42b639f84b4aa026c06fde2bfe3c5f483938c76aca02cb661b03\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"562a471da007b25c7439a1159b4692d0cd33dbe0f82345ed5b8746cbaefd2fb6\""
May 27 03:17:09.461051 containerd[1578]: time="2025-05-27T03:17:09.461029542Z" level=info msg="StartContainer for \"562a471da007b25c7439a1159b4692d0cd33dbe0f82345ed5b8746cbaefd2fb6\""
May 27 03:17:09.463170 containerd[1578]: time="2025-05-27T03:17:09.463111299Z" level=info msg="connecting to shim 562a471da007b25c7439a1159b4692d0cd33dbe0f82345ed5b8746cbaefd2fb6" address="unix:///run/containerd/s/a164428a5c4436e4ecb6cc374a091d2c19dd235c7adc825b74418cd1e1e7953a" protocol=ttrpc version=3
May 27 03:17:09.490700 systemd[1]: Started cri-containerd-562a471da007b25c7439a1159b4692d0cd33dbe0f82345ed5b8746cbaefd2fb6.scope - libcontainer container 562a471da007b25c7439a1159b4692d0cd33dbe0f82345ed5b8746cbaefd2fb6.
May 27 03:17:09.557968 containerd[1578]: time="2025-05-27T03:17:09.557915069Z" level=info msg="StartContainer for \"562a471da007b25c7439a1159b4692d0cd33dbe0f82345ed5b8746cbaefd2fb6\" returns successfully"
May 27 03:17:10.212893 kubelet[2688]: I0527 03:17:10.212820 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fcb869445-7f5xw" podStartSLOduration=32.887626838 podStartE2EDuration="38.21280267s" podCreationTimestamp="2025-05-27 03:16:32 +0000 UTC" firstStartedPulling="2025-05-27 03:17:04.109103253 +0000 UTC m=+47.416908518" lastFinishedPulling="2025-05-27 03:17:09.434279085 +0000 UTC m=+52.742084350" observedRunningTime="2025-05-27 03:17:10.212346634 +0000 UTC m=+53.520151899" watchObservedRunningTime="2025-05-27 03:17:10.21280267 +0000 UTC m=+53.520607935"
May 27 03:17:10.828409 kubelet[2688]: I0527 03:17:10.828339 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fcb869445-ns9tb" podStartSLOduration=33.626129728 podStartE2EDuration="38.828315325s" podCreationTimestamp="2025-05-27 03:16:32 +0000 UTC" firstStartedPulling="2025-05-27 03:17:03.830033289 +0000 UTC m=+47.137838554" lastFinishedPulling="2025-05-27 03:17:09.032218876 +0000 UTC m=+52.340024151" observedRunningTime="2025-05-27 03:17:10.300594127 +0000 UTC m=+53.608399392" watchObservedRunningTime="2025-05-27 03:17:10.828315325 +0000 UTC m=+54.136120590"
May 27 03:17:11.039774 kubelet[2688]: I0527 03:17:11.039741 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 03:17:12.608446 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:36376.service - OpenSSH per-connection server daemon (10.0.0.1:36376).
May 27 03:17:12.679734 sshd[5184]: Accepted publickey for core from 10.0.0.1 port 36376 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:17:12.681656 sshd-session[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:12.686483 systemd-logind[1565]: New session 13 of user core. May 27 03:17:12.696708 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:17:12.834306 sshd[5186]: Connection closed by 10.0.0.1 port 36376 May 27 03:17:12.834701 sshd-session[5184]: pam_unix(sshd:session): session closed for user core May 27 03:17:12.839756 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:36376.service: Deactivated successfully. May 27 03:17:12.842172 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:17:12.843350 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit. May 27 03:17:12.844902 systemd-logind[1565]: Removed session 13. May 27 03:17:13.783629 containerd[1578]: time="2025-05-27T03:17:13.783465532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:17:14.159081 containerd[1578]: time="2025-05-27T03:17:14.158931734Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:17:14.198055 containerd[1578]: time="2025-05-27T03:17:14.197989928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:17:14.198206 
containerd[1578]: time="2025-05-27T03:17:14.198065680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:17:14.198306 kubelet[2688]: E0527 03:17:14.198252 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:17:14.198722 kubelet[2688]: E0527 03:17:14.198315 2688 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:17:14.198722 kubelet[2688]: E0527 03:17:14.198463 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6eb246ae1e7a46c38a021552b652d8fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lcjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf6dc74bf-gpwx9_calico-system(1bffbfb7-7888-4b4e-9c96-fa797074a5c4): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:17:14.200201 containerd[1578]: 
time="2025-05-27T03:17:14.200176351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:17:14.642331 containerd[1578]: time="2025-05-27T03:17:14.642252523Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:17:14.643828 containerd[1578]: time="2025-05-27T03:17:14.643668430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:17:14.643828 containerd[1578]: time="2025-05-27T03:17:14.643795228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:17:14.644112 kubelet[2688]: E0527 03:17:14.644028 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:17:14.644217 kubelet[2688]: E0527 03:17:14.644108 2688 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:17:14.644290 kubelet[2688]: E0527 03:17:14.644246 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf6dc74bf-gpwx9_calico-system(1bffbfb7-7888-4b4e-9c96-fa797074a5c4): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:17:14.645445 kubelet[2688]: E0527 03:17:14.645402 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-bf6dc74bf-gpwx9" podUID="1bffbfb7-7888-4b4e-9c96-fa797074a5c4" May 27 03:17:17.847967 systemd[1]: Started 
sshd@13-10.0.0.73:22-10.0.0.1:58792.service - OpenSSH per-connection server daemon (10.0.0.1:58792).
May 27 03:17:17.898020 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 58792 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:17.899337 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:17.904072 systemd-logind[1565]: New session 14 of user core.
May 27 03:17:17.914679 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 03:17:18.036873 sshd[5216]: Connection closed by 10.0.0.1 port 58792
May 27 03:17:18.037234 sshd-session[5214]: pam_unix(sshd:session): session closed for user core
May 27 03:17:18.041396 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:58792.service: Deactivated successfully.
May 27 03:17:18.043667 systemd[1]: session-14.scope: Deactivated successfully.
May 27 03:17:18.044462 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit.
May 27 03:17:18.046191 systemd-logind[1565]: Removed session 14.
May 27 03:17:20.784408 containerd[1578]: time="2025-05-27T03:17:20.784259804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:17:21.050501 containerd[1578]: time="2025-05-27T03:17:21.050321901Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:17:21.068478 containerd[1578]: time="2025-05-27T03:17:21.068416838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:17:21.068478 containerd[1578]: time="2025-05-27T03:17:21.068463918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:17:21.068731 kubelet[2688]: E0527 03:17:21.068687 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:17:21.069169 kubelet[2688]: E0527 03:17:21.068735 2688 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:17:21.069169 kubelet[2688]: E0527 03:17:21.068878 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glzqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-qbl7d_calico-system(9ff6044c-a1cf-41a0-9830-a2555119f01d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:17:21.070092 kubelet[2688]: E0527 03:17:21.070046 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-qbl7d" podUID="9ff6044c-a1cf-41a0-9830-a2555119f01d"
May 27 03:17:23.051736 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:58800.service - OpenSSH per-connection server daemon (10.0.0.1:58800).
May 27 03:17:23.111489 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 58800 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:23.113094 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:23.118007 systemd-logind[1565]: New session 15 of user core.
May 27 03:17:23.125716 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 03:17:23.255148 sshd[5237]: Connection closed by 10.0.0.1 port 58800
May 27 03:17:23.255491 sshd-session[5235]: pam_unix(sshd:session): session closed for user core
May 27 03:17:23.261288 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:58800.service: Deactivated successfully.
May 27 03:17:23.264029 systemd[1]: session-15.scope: Deactivated successfully.
May 27 03:17:23.264931 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit.
May 27 03:17:23.266582 systemd-logind[1565]: Removed session 15.
May 27 03:17:25.784295 kubelet[2688]: E0527 03:17:25.784195 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-bf6dc74bf-gpwx9" podUID="1bffbfb7-7888-4b4e-9c96-fa797074a5c4"
May 27 03:17:27.682004 containerd[1578]: time="2025-05-27T03:17:27.681948831Z" level=info msg="TaskExit event in podsandbox handler container_id:\"387286f37b512239ed6bff242c002f8ea5f77d5f558c7d24cba3c11c19059e85\" id:\"345e12d0c8fcdb4c3f44e108ed35f596663a701fa2a240ec715b047819df8466\" pid:5264 exited_at:{seconds:1748315847 nanos:681459438}"
May 27 03:17:28.279715 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:51148.service - OpenSSH per-connection server daemon (10.0.0.1:51148).
May 27 03:17:28.339029 sshd[5278]: Accepted publickey for core from 10.0.0.1 port 51148 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:28.340639 sshd-session[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:28.345368 systemd-logind[1565]: New session 16 of user core.
May 27 03:17:28.355674 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 03:17:28.487330 sshd[5280]: Connection closed by 10.0.0.1 port 51148
May 27 03:17:28.487882 sshd-session[5278]: pam_unix(sshd:session): session closed for user core
May 27 03:17:28.499474 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:51148.service: Deactivated successfully.
May 27 03:17:28.501752 systemd[1]: session-16.scope: Deactivated successfully.
May 27 03:17:28.502651 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit.
May 27 03:17:28.507294 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:51156.service - OpenSSH per-connection server daemon (10.0.0.1:51156).
May 27 03:17:28.508636 systemd-logind[1565]: Removed session 16.
May 27 03:17:28.554535 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 51156 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:28.556516 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:28.562329 systemd-logind[1565]: New session 17 of user core.
May 27 03:17:28.576735 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 03:17:28.857648 sshd[5296]: Connection closed by 10.0.0.1 port 51156
May 27 03:17:28.859047 sshd-session[5294]: pam_unix(sshd:session): session closed for user core
May 27 03:17:28.872615 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:51156.service: Deactivated successfully.
May 27 03:17:28.875268 systemd[1]: session-17.scope: Deactivated successfully.
May 27 03:17:28.876171 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit.
May 27 03:17:28.880935 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:51164.service - OpenSSH per-connection server daemon (10.0.0.1:51164).
May 27 03:17:28.881953 systemd-logind[1565]: Removed session 17.
May 27 03:17:28.943508 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 51164 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:28.945156 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:28.949970 systemd-logind[1565]: New session 18 of user core.
May 27 03:17:28.960684 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 03:17:29.715927 sshd[5309]: Connection closed by 10.0.0.1 port 51164
May 27 03:17:29.716917 sshd-session[5307]: pam_unix(sshd:session): session closed for user core
May 27 03:17:29.730317 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:51164.service: Deactivated successfully.
May 27 03:17:29.732949 systemd[1]: session-18.scope: Deactivated successfully.
May 27 03:17:29.734058 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit.
May 27 03:17:29.737495 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:51180.service - OpenSSH per-connection server daemon (10.0.0.1:51180).
May 27 03:17:29.740093 systemd-logind[1565]: Removed session 18.
May 27 03:17:29.783633 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 51180 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:29.784933 sshd-session[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:29.789417 systemd-logind[1565]: New session 19 of user core.
May 27 03:17:29.797671 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 03:17:30.264062 sshd[5331]: Connection closed by 10.0.0.1 port 51180
May 27 03:17:30.264381 sshd-session[5329]: pam_unix(sshd:session): session closed for user core
May 27 03:17:30.273516 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:51180.service: Deactivated successfully.
May 27 03:17:30.275685 systemd[1]: session-19.scope: Deactivated successfully.
May 27 03:17:30.276515 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit.
May 27 03:17:30.280025 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:51192.service - OpenSSH per-connection server daemon (10.0.0.1:51192).
May 27 03:17:30.280983 systemd-logind[1565]: Removed session 19.
May 27 03:17:30.333262 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 51192 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:30.334956 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:30.342686 systemd-logind[1565]: New session 20 of user core.
May 27 03:17:30.348733 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 03:17:30.517271 sshd[5345]: Connection closed by 10.0.0.1 port 51192
May 27 03:17:30.517523 sshd-session[5343]: pam_unix(sshd:session): session closed for user core
May 27 03:17:30.522401 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:51192.service: Deactivated successfully.
May 27 03:17:30.524792 systemd[1]: session-20.scope: Deactivated successfully.
May 27 03:17:30.525558 systemd-logind[1565]: Session 20 logged out. Waiting for processes to exit.
May 27 03:17:30.527454 systemd-logind[1565]: Removed session 20.
May 27 03:17:31.193973 kubelet[2688]: I0527 03:17:31.193910 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 03:17:34.007411 containerd[1578]: time="2025-05-27T03:17:34.007367148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02\" id:\"b5c96b512629d572999177806b809f3d4b13435d9160f643e1cd5ee5b58aa0ba\" pid:5371 exited_at:{seconds:1748315854 nanos:7204129}"
May 27 03:17:34.786978 kubelet[2688]: E0527 03:17:34.786922 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-qbl7d" podUID="9ff6044c-a1cf-41a0-9830-a2555119f01d"
May 27 03:17:35.531015 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:56358.service - OpenSSH per-connection server daemon (10.0.0.1:56358).
May 27 03:17:35.599738 sshd[5383]: Accepted publickey for core from 10.0.0.1 port 56358 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:35.602146 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:35.608899 systemd-logind[1565]: New session 21 of user core.
May 27 03:17:35.617767 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 03:17:35.751341 sshd[5385]: Connection closed by 10.0.0.1 port 56358
May 27 03:17:35.751674 sshd-session[5383]: pam_unix(sshd:session): session closed for user core
May 27 03:17:35.756227 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:56358.service: Deactivated successfully.
May 27 03:17:35.758436 systemd[1]: session-21.scope: Deactivated successfully.
May 27 03:17:35.759441 systemd-logind[1565]: Session 21 logged out. Waiting for processes to exit.
May 27 03:17:35.760857 systemd-logind[1565]: Removed session 21.
May 27 03:17:39.783476 containerd[1578]: time="2025-05-27T03:17:39.783141509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:17:40.074732 containerd[1578]: time="2025-05-27T03:17:40.074565910Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:17:40.083886 containerd[1578]: time="2025-05-27T03:17:40.083834837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:17:40.084002 containerd[1578]: time="2025-05-27T03:17:40.083951840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:17:40.084193 kubelet[2688]: E0527 03:17:40.084141 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:17:40.084619 kubelet[2688]: E0527 03:17:40.084211 2688 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:17:40.084619 kubelet[2688]: E0527 03:17:40.084369 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6eb246ae1e7a46c38a021552b652d8fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lcjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:n
il,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf6dc74bf-gpwx9_calico-system(1bffbfb7-7888-4b4e-9c96-fa797074a5c4): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:17:40.086733 containerd[1578]: time="2025-05-27T03:17:40.086691560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:17:40.405939 containerd[1578]: time="2025-05-27T03:17:40.405443074Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:17:40.408189 containerd[1578]: time="2025-05-27T03:17:40.407413525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:17:40.408189 containerd[1578]: 
time="2025-05-27T03:17:40.407507043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:17:40.409745 kubelet[2688]: E0527 03:17:40.409684 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:17:40.409847 kubelet[2688]: E0527 03:17:40.409750 2688 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:17:40.409938 kubelet[2688]: E0527 03:17:40.409888 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf6dc74bf-gpwx9_calico-system(1bffbfb7-7888-4b4e-9c96-fa797074a5c4): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:17:40.411415 kubelet[2688]: E0527 03:17:40.411337 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-bf6dc74bf-gpwx9" podUID="1bffbfb7-7888-4b4e-9c96-fa797074a5c4" May 27 03:17:40.770807 systemd[1]: Started sshd@21-10.0.0.73:22-10.0.0.1:56366.service - OpenSSH per-connection server daemon (10.0.0.1:56366). May 27 03:17:40.828463 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 56366 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg May 27 03:17:40.829950 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:40.834618 systemd-logind[1565]: New session 22 of user core. May 27 03:17:40.847696 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 27 03:17:40.960250 sshd[5408]: Connection closed by 10.0.0.1 port 56366
May 27 03:17:40.960614 sshd-session[5406]: pam_unix(sshd:session): session closed for user core
May 27 03:17:40.966046 systemd[1]: sshd@21-10.0.0.73:22-10.0.0.1:56366.service: Deactivated successfully.
May 27 03:17:40.968473 systemd[1]: session-22.scope: Deactivated successfully.
May 27 03:17:40.969462 systemd-logind[1565]: Session 22 logged out. Waiting for processes to exit.
May 27 03:17:40.970951 systemd-logind[1565]: Removed session 22.
May 27 03:17:41.069899 containerd[1578]: time="2025-05-27T03:17:41.069780057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2e1448d3915c83509078ff6122cf8ef670fd467e2db59b03157214865172b02\" id:\"dc0c0daac5a414eee97ddd20b08166ceb668a1f072a02a297f2ac52e02bb3e48\" pid:5433 exited_at:{seconds:1748315861 nanos:69611848}"
May 27 03:17:45.973616 systemd[1]: Started sshd@22-10.0.0.73:22-10.0.0.1:38548.service - OpenSSH per-connection server daemon (10.0.0.1:38548).
May 27 03:17:46.033844 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 38548 ssh2: RSA SHA256:HN4XbSDsEGXKsR+b1h6s4C/jYnLvBqmszTOEbPjODIg
May 27 03:17:46.035649 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:46.040235 systemd-logind[1565]: New session 23 of user core.
May 27 03:17:46.046724 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 03:17:46.191201 sshd[5449]: Connection closed by 10.0.0.1 port 38548
May 27 03:17:46.192342 sshd-session[5447]: pam_unix(sshd:session): session closed for user core
May 27 03:17:46.198036 systemd[1]: sshd@22-10.0.0.73:22-10.0.0.1:38548.service: Deactivated successfully.
May 27 03:17:46.200476 systemd[1]: session-23.scope: Deactivated successfully.
May 27 03:17:46.203602 systemd-logind[1565]: Session 23 logged out. Waiting for processes to exit.
May 27 03:17:46.204888 systemd-logind[1565]: Removed session 23.
May 27 03:17:47.785098 containerd[1578]: time="2025-05-27T03:17:47.784836602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:17:48.001895 containerd[1578]: time="2025-05-27T03:17:48.001840298Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:17:48.003178 containerd[1578]: time="2025-05-27T03:17:48.003025784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:17:48.003352 containerd[1578]: time="2025-05-27T03:17:48.003124661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:17:48.004826 kubelet[2688]: E0527 03:17:48.004762 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:17:48.005232 kubelet[2688]: E0527 03:17:48.004834 2688 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:17:48.007520 kubelet[2688]: E0527 03:17:48.007451 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glzqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-qbl7d_calico-system(9ff6044c-a1cf-41a0-9830-a2555119f01d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:17:48.008627 kubelet[2688]: E0527 03:17:48.008594 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-qbl7d" podUID="9ff6044c-a1cf-41a0-9830-a2555119f01d"