May 27 18:30:28.007887 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025 May 27 18:30:28.007932 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 18:30:28.007943 kernel: BIOS-provided physical RAM map: May 27 18:30:28.007950 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 27 18:30:28.007956 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 27 18:30:28.007994 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 27 18:30:28.008002 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable May 27 18:30:28.008013 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved May 27 18:30:28.008024 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 18:30:28.008031 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 27 18:30:28.008038 kernel: NX (Execute Disable) protection: active May 27 18:30:28.008045 kernel: APIC: Static calls initialized May 27 18:30:28.008052 kernel: SMBIOS 2.8 present. May 27 18:30:28.008059 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 May 27 18:30:28.008071 kernel: DMI: Memory slots populated: 1/1 May 27 18:30:28.008079 kernel: Hypervisor detected: KVM May 27 18:30:28.008090 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 27 18:30:28.008098 kernel: kvm-clock: using sched offset of 4869634802 cycles May 27 18:30:28.008107 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 27 18:30:28.008115 kernel: tsc: Detected 2494.140 MHz processor May 27 18:30:28.008123 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 27 18:30:28.008131 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 27 18:30:28.008139 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 May 27 18:30:28.008151 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 27 18:30:28.008160 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 27 18:30:28.008167 kernel: ACPI: Early table checksum verification disabled May 27 18:30:28.008175 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) May 27 18:30:28.008184 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:30:28.008192 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:30:28.008200 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:30:28.008208 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 27 18:30:28.008215 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:30:28.008227 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:30:28.008234 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 18:30:28.008245 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 
BXPC 00000001) May 27 18:30:28.008256 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] May 27 18:30:28.008268 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] May 27 18:30:28.008279 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 27 18:30:28.008289 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] May 27 18:30:28.008300 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] May 27 18:30:28.008322 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] May 27 18:30:28.008335 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] May 27 18:30:28.008346 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 27 18:30:28.008358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 27 18:30:28.008370 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] May 27 18:30:28.008385 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] May 27 18:30:28.008398 kernel: Zone ranges: May 27 18:30:28.008410 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 27 18:30:28.008423 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] May 27 18:30:28.008435 kernel: Normal empty May 27 18:30:28.008447 kernel: Device empty May 27 18:30:28.008459 kernel: Movable zone start for each node May 27 18:30:28.008473 kernel: Early memory node ranges May 27 18:30:28.008486 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 27 18:30:28.008498 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] May 27 18:30:28.008516 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] May 27 18:30:28.008528 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 27 18:30:28.008536 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 27 18:30:28.008545 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges May 27 18:30:28.008554 kernel: ACPI: PM-Timer IO Port: 0x608 May 27 18:30:28.008562 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 27 18:30:28.008576 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 27 18:30:28.008585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 27 18:30:28.008596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 27 18:30:28.008609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 27 18:30:28.008620 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 27 18:30:28.008629 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 27 18:30:28.008637 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 27 18:30:28.008646 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 27 18:30:28.008654 kernel: TSC deadline timer available May 27 18:30:28.008662 kernel: CPU topo: Max. logical packages: 1 May 27 18:30:28.008671 kernel: CPU topo: Max. logical dies: 1 May 27 18:30:28.008680 kernel: CPU topo: Max. dies per package: 1 May 27 18:30:28.008692 kernel: CPU topo: Max. threads per core: 1 May 27 18:30:28.008700 kernel: CPU topo: Num. cores per package: 2 May 27 18:30:28.008708 kernel: CPU topo: Num. 
threads per package: 2 May 27 18:30:28.008716 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs May 27 18:30:28.008725 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 27 18:30:28.008733 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices May 27 18:30:28.008741 kernel: Booting paravirtualized kernel on KVM May 27 18:30:28.008766 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 27 18:30:28.008774 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 27 18:30:28.008783 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 May 27 18:30:28.008795 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 May 27 18:30:28.008803 kernel: pcpu-alloc: [0] 0 1 May 27 18:30:28.008811 kernel: kvm-guest: PV spinlocks disabled, no host support May 27 18:30:28.008822 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 18:30:28.008831 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 18:30:28.008840 kernel: random: crng init done May 27 18:30:28.008848 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 18:30:28.008857 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 27 18:30:28.008869 kernel: Fallback order for Node 0: 0 May 27 18:30:28.008877 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 May 27 18:30:28.008885 kernel: Policy zone: DMA32 May 27 18:30:28.008893 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 18:30:28.008902 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 27 18:30:28.008910 kernel: Kernel/User page tables isolation: enabled May 27 18:30:28.008918 kernel: ftrace: allocating 40081 entries in 157 pages May 27 18:30:28.008927 kernel: ftrace: allocated 157 pages with 5 groups May 27 18:30:28.008935 kernel: Dynamic Preempt: voluntary May 27 18:30:28.008947 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 18:30:28.008963 kernel: rcu: RCU event tracing is enabled. May 27 18:30:28.009925 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 27 18:30:28.009953 kernel: Trampoline variant of Tasks RCU enabled. May 27 18:30:28.011012 kernel: Rude variant of Tasks RCU enabled. May 27 18:30:28.011029 kernel: Tracing variant of Tasks RCU enabled. May 27 18:30:28.011039 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 18:30:28.011049 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 27 18:30:28.011058 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 18:30:28.011078 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 18:30:28.011088 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 27 18:30:28.011096 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 27 18:30:28.011105 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 27 18:30:28.011114 kernel: Console: colour VGA+ 80x25 May 27 18:30:28.011123 kernel: printk: legacy console [tty0] enabled May 27 18:30:28.011131 kernel: printk: legacy console [ttyS0] enabled May 27 18:30:28.011140 kernel: ACPI: Core revision 20240827 May 27 18:30:28.011149 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 27 18:30:28.011171 kernel: APIC: Switch to symmetric I/O mode setup May 27 18:30:28.011180 kernel: x2apic enabled May 27 18:30:28.011190 kernel: APIC: Switched APIC routing to: physical x2apic May 27 18:30:28.011202 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 27 18:30:28.011214 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 27 18:30:28.011223 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) May 27 18:30:28.011233 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 27 18:30:28.011242 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 27 18:30:28.011251 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 27 18:30:28.011264 kernel: Spectre V2 : Mitigation: Retpolines May 27 18:30:28.011273 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 27 18:30:28.011282 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 27 18:30:28.011291 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 27 18:30:28.011300 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 27 18:30:28.011309 kernel: MDS: Mitigation: Clear CPU buffers May 27 18:30:28.011318 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 27 18:30:28.011331 kernel: ITS: Mitigation: Aligned branch/return thunks May 27 18:30:28.011340 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 27 18:30:28.011349 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 27 18:30:28.011358 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 27 18:30:28.011367 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 27 18:30:28.011376 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 27 18:30:28.011385 kernel: Freeing SMP alternatives memory: 32K May 27 18:30:28.011394 kernel: pid_max: default: 32768 minimum: 301 May 27 18:30:28.011403 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 18:30:28.011415 kernel: landlock: Up and running. May 27 18:30:28.011424 kernel: SELinux: Initializing. May 27 18:30:28.011433 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 27 18:30:28.011442 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 27 18:30:28.011452 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) May 27 18:30:28.011461 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. May 27 18:30:28.011470 kernel: signal: max sigframe size: 1776 May 27 18:30:28.011479 kernel: rcu: Hierarchical SRCU implementation. May 27 18:30:28.011488 kernel: rcu: Max phase no-delay instances is 400. 
May 27 18:30:28.011501 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 18:30:28.011510 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 27 18:30:28.011519 kernel: smp: Bringing up secondary CPUs ... May 27 18:30:28.011528 kernel: smpboot: x86: Booting SMP configuration: May 27 18:30:28.011539 kernel: .... node #0, CPUs: #1 May 27 18:30:28.011548 kernel: smp: Brought up 1 node, 2 CPUs May 27 18:30:28.011557 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) May 27 18:30:28.011567 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 125140K reserved, 0K cma-reserved) May 27 18:30:28.011576 kernel: devtmpfs: initialized May 27 18:30:28.011589 kernel: x86/mm: Memory block size: 128MB May 27 18:30:28.011599 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 18:30:28.011608 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 27 18:30:28.011617 kernel: pinctrl core: initialized pinctrl subsystem May 27 18:30:28.011626 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 18:30:28.011635 kernel: audit: initializing netlink subsys (disabled) May 27 18:30:28.011645 kernel: audit: type=2000 audit(1748370624.690:1): state=initialized audit_enabled=0 res=1 May 27 18:30:28.011653 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 18:30:28.011662 kernel: thermal_sys: Registered thermal governor 'user_space' May 27 18:30:28.011675 kernel: cpuidle: using governor menu May 27 18:30:28.011684 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 18:30:28.011693 kernel: dca service started, version 1.12.1 May 27 18:30:28.011702 kernel: PCI: Using configuration type 1 for base access May 27 18:30:28.011711 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 27 18:30:28.011840 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 18:30:28.011850 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 27 18:30:28.011859 kernel: ACPI: Added _OSI(Module Device) May 27 18:30:28.011868 kernel: ACPI: Added _OSI(Processor Device) May 27 18:30:28.011882 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 18:30:28.011891 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 18:30:28.011900 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 18:30:28.011909 kernel: ACPI: Interpreter enabled May 27 18:30:28.011918 kernel: ACPI: PM: (supports S0 S5) May 27 18:30:28.011930 kernel: ACPI: Using IOAPIC for interrupt routing May 27 18:30:28.011944 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 27 18:30:28.011957 kernel: PCI: Using E820 reservations for host bridge windows May 27 18:30:28.013051 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 27 18:30:28.013074 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 18:30:28.013409 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 27 18:30:28.013539 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 27 18:30:28.013676 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 27 18:30:28.013690 kernel: acpiphp: Slot [3] registered May 27 18:30:28.013700 kernel: acpiphp: Slot [4] registered May 27 18:30:28.013710 kernel: acpiphp: Slot [5] registered May 27 18:30:28.013726 kernel: acpiphp: Slot [6] registered May 27 18:30:28.013735 kernel: acpiphp: Slot [7] registered May 27 18:30:28.013745 kernel: acpiphp: Slot [8] registered May 27 18:30:28.013754 kernel: acpiphp: Slot [9] registered May 27 18:30:28.013763 kernel: acpiphp: Slot [10] registered May 27 18:30:28.013772 kernel: acpiphp: Slot [11] registered May 27 18:30:28.013781 kernel: acpiphp: Slot [12] registered May 27 18:30:28.013790 kernel: acpiphp: Slot [13] registered May 27 18:30:28.013799 kernel: acpiphp: Slot [14] registered May 27 18:30:28.013808 kernel: acpiphp: Slot [15] registered May 27 18:30:28.013821 kernel: acpiphp: Slot [16] registered May 27 18:30:28.013830 kernel: acpiphp: Slot [17] registered May 27 18:30:28.013839 kernel: acpiphp: Slot [18] registered May 27 18:30:28.013848 kernel: acpiphp: Slot [19] registered May 27 18:30:28.013857 kernel: acpiphp: Slot [20] registered May 27 18:30:28.013866 kernel: acpiphp: Slot [21] registered May 27 18:30:28.013875 kernel: acpiphp: Slot [22] registered May 27 18:30:28.013884 kernel: acpiphp: Slot [23] registered May 27 18:30:28.013893 kernel: acpiphp: Slot [24] registered May 27 18:30:28.013905 kernel: acpiphp: Slot [25] registered May 27 18:30:28.013914 kernel: acpiphp: Slot [26] registered May 27 18:30:28.013923 kernel: acpiphp: Slot [27] registered May 27 18:30:28.013932 kernel: acpiphp: Slot [28] registered May 27 18:30:28.013941 kernel: acpiphp: Slot [29] registered May 27 18:30:28.013950 kernel: acpiphp: Slot [30] registered May 27 18:30:28.014994 kernel: acpiphp: Slot [31] registered May 27 18:30:28.015019 kernel: PCI host bridge to bus 0000:00 May 27 18:30:28.015211 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 27 18:30:28.015323 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 27 18:30:28.015419 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] May 27 18:30:28.015532 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 27 18:30:28.015621 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] May 27 18:30:28.015709 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 27 18:30:28.015921 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint May 27 18:30:28.016078 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint May 27 18:30:28.016220 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint May 27 18:30:28.016359 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] May 27 18:30:28.016495 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk May 27 18:30:28.016609 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk May 27 18:30:28.016737 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk May 27 18:30:28.016900 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk May 27 18:30:28.018648 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint May 27 18:30:28.018821 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] May 27 18:30:28.018991 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint May 27 18:30:28.019121 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 27 18:30:28.019218 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 27 18:30:28.019365 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint May 27 18:30:28.019478 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] May 27 18:30:28.019573 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] May 27 18:30:28.019669 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] May 27 18:30:28.019829 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] May 27 18:30:28.019939 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 27 18:30:28.020162 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 27 18:30:28.020262 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] May 27 18:30:28.020361 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] May 27 18:30:28.020512 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] May 27 18:30:28.020630 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 27 18:30:28.020728 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] May 27 18:30:28.020838 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] May 27 18:30:28.020933 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] May 27 18:30:28.021076 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint May 27 18:30:28.021180 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] May 27 18:30:28.021274 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] May 27 18:30:28.021405 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] May 27 18:30:28.021524 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 27 18:30:28.021619 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] May 27 18:30:28.021711 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] May 27 18:30:28.021812 kernel: pci 0000:00:06.0: BAR 4 [mem 
0xfe810000-0xfe813fff 64bit pref] May 27 18:30:28.021934 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 27 18:30:28.022058 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] May 27 18:30:28.022290 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] May 27 18:30:28.022395 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] May 27 18:30:28.022512 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint May 27 18:30:28.022612 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] May 27 18:30:28.022716 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] May 27 18:30:28.022729 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 27 18:30:28.022739 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 27 18:30:28.022750 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 27 18:30:28.022762 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 27 18:30:28.022771 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 27 18:30:28.022781 kernel: iommu: Default domain type: Translated May 27 18:30:28.022790 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 27 18:30:28.022804 kernel: PCI: Using ACPI for IRQ routing May 27 18:30:28.022813 kernel: PCI: pci_cache_line_size set to 64 bytes May 27 18:30:28.022829 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 27 18:30:28.022839 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] May 27 18:30:28.022946 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 27 18:30:28.023063 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 27 18:30:28.023163 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 27 18:30:28.023175 kernel: vgaarb: loaded May 27 18:30:28.023185 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 27 18:30:28.023199 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 27 18:30:28.023208 kernel: clocksource: Switched to clocksource kvm-clock May 27 18:30:28.023217 kernel: VFS: Disk quotas dquot_6.6.0 May 27 18:30:28.023227 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 18:30:28.023236 kernel: pnp: PnP ACPI init May 27 18:30:28.023245 kernel: pnp: PnP ACPI: found 4 devices May 27 18:30:28.023255 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 27 18:30:28.023264 kernel: NET: Registered PF_INET protocol family May 27 18:30:28.023274 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 18:30:28.023292 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 27 18:30:28.023302 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 27 18:30:28.023311 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 27 18:30:28.023321 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 27 18:30:28.023330 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 27 18:30:28.023340 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 27 18:30:28.023349 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 27 18:30:28.023358 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 18:30:28.023367 kernel: NET: Registered PF_XDP protocol family May 27 18:30:28.023511 kernel: pci_bus 
0000:00: resource 4 [io 0x0000-0x0cf7 window] May 27 18:30:28.023609 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 27 18:30:28.023697 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 27 18:30:28.023840 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 27 18:30:28.023939 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] May 27 18:30:28.025488 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 27 18:30:28.025620 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 27 18:30:28.025642 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 27 18:30:28.025754 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 31151 usecs May 27 18:30:28.025768 kernel: PCI: CLS 0 bytes, default 64 May 27 18:30:28.025778 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 27 18:30:28.025788 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 27 18:30:28.025797 kernel: Initialise system trusted keyrings May 27 18:30:28.025807 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 27 18:30:28.025816 kernel: Key type asymmetric registered May 27 18:30:28.025825 kernel: Asymmetric key parser 'x509' registered May 27 18:30:28.025840 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 27 18:30:28.025849 kernel: io scheduler mq-deadline registered May 27 18:30:28.025859 kernel: io scheduler kyber registered May 27 18:30:28.025867 kernel: io scheduler bfq registered May 27 18:30:28.025877 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 27 18:30:28.025886 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 27 18:30:28.025895 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 27 18:30:28.025904 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 27 18:30:28.025914 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 18:30:28.025926 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 27 18:30:28.025936 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 27 18:30:28.025945 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 27 18:30:28.025954 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 27 18:30:28.026109 kernel: rtc_cmos 00:03: RTC can wake from S4 May 27 18:30:28.026253 kernel: rtc_cmos 00:03: registered as rtc0 May 27 18:30:28.026382 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T18:30:27 UTC (1748370627) May 27 18:30:28.026401 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 May 27 18:30:28.026536 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 27 18:30:28.026549 kernel: intel_pstate: CPU model not supported May 27 18:30:28.026559 kernel: NET: Registered PF_INET6 protocol family May 27 18:30:28.026568 kernel: Segment Routing with IPv6 May 27 18:30:28.026578 kernel: In-situ OAM (IOAM) with IPv6 May 27 18:30:28.026587 kernel: NET: Registered PF_PACKET protocol family May 27 18:30:28.026596 kernel: Key type dns_resolver registered May 27 18:30:28.026606 kernel: IPI shorthand broadcast: enabled May 27 18:30:28.026618 kernel: sched_clock: Marking stable (3608007852, 100037840)->(3725337897, -17292205) May 27 18:30:28.026634 kernel: registered taskstats version 1 May 27 18:30:28.026643 kernel: Loading compiled-in X.509 certificates May 27 18:30:28.026653 kernel: Loaded X.509 cert 'Kinvolk GmbH: 
Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c' May 27 18:30:28.026662 kernel: Demotion targets for Node 0: null May 27 18:30:28.026672 kernel: Key type .fscrypt registered May 27 18:30:28.026681 kernel: Key type fscrypt-provisioning registered May 27 18:30:28.026716 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 18:30:28.026731 kernel: ima: Allocated hash algorithm: sha1 May 27 18:30:28.026744 kernel: ima: No architecture policies found May 27 18:30:28.026757 kernel: clk: Disabling unused clocks May 27 18:30:28.026767 kernel: Warning: unable to open an initial console. May 27 18:30:28.026777 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 18:30:28.026787 kernel: Write protecting the kernel read-only data: 24576k May 27 18:30:28.026796 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 18:30:28.026807 kernel: Run /init as init process May 27 18:30:28.026816 kernel: with arguments: May 27 18:30:28.026826 kernel: /init May 27 18:30:28.026835 kernel: with environment: May 27 18:30:28.026849 kernel: HOME=/ May 27 18:30:28.026859 kernel: TERM=linux May 27 18:30:28.026868 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 18:30:28.026880 systemd[1]: Successfully made /usr/ read-only. May 27 18:30:28.026895 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 18:30:28.026905 systemd[1]: Detected virtualization kvm. May 27 18:30:28.026915 systemd[1]: Detected architecture x86-64. May 27 18:30:28.026928 systemd[1]: Running in initrd. May 27 18:30:28.026938 systemd[1]: No hostname configured, using default hostname. May 27 18:30:28.026948 systemd[1]: Hostname set to <localhost>. May 27 18:30:28.026975 systemd[1]: Initializing machine ID from VM UUID. May 27 18:30:28.026986 systemd[1]: Queued start job for default target initrd.target. May 27 18:30:28.026996 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 18:30:28.027007 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 18:30:28.027018 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 18:30:28.027033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 18:30:28.027042 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 18:30:28.027057 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 18:30:28.027068 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 18:30:28.027081 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 18:30:28.027091 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 18:30:28.027102 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 18:30:28.027112 systemd[1]: Reached target paths.target - Path Units. May 27 18:30:28.027122 systemd[1]: Reached target slices.target - Slice Units. 
May 27 18:30:28.027132 systemd[1]: Reached target swap.target - Swaps. May 27 18:30:28.027143 systemd[1]: Reached target timers.target - Timer Units. May 27 18:30:28.027153 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 18:30:28.027166 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 18:30:28.027177 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 18:30:28.027187 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 18:30:28.027197 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 18:30:28.027207 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 18:30:28.027217 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 18:30:28.027227 systemd[1]: Reached target sockets.target - Socket Units. May 27 18:30:28.027237 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 18:30:28.027247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 18:30:28.027261 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 18:30:28.027272 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 18:30:28.027282 systemd[1]: Starting systemd-fsck-usr.service... May 27 18:30:28.027292 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 18:30:28.027302 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 18:30:28.027312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:30:28.027322 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 18:30:28.027337 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 18:30:28.027347 systemd[1]: Finished systemd-fsck-usr.service. May 27 18:30:28.027398 systemd-journald[211]: Collecting audit messages is disabled. May 27 18:30:28.027428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 18:30:28.027439 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 18:30:28.027451 systemd-journald[211]: Journal started May 27 18:30:28.027476 systemd-journald[211]: Runtime Journal (/run/log/journal/31ee8156a08548e99fdcf00bef40df42) is 4.9M, max 39.5M, 34.6M free. May 27 18:30:27.985290 systemd-modules-load[213]: Inserted module 'overlay' May 27 18:30:28.029595 systemd[1]: Started systemd-journald.service - Journal Service. May 27 18:30:28.036987 kernel: Bridge firewalling registered May 27 18:30:28.037808 systemd-modules-load[213]: Inserted module 'br_netfilter' May 27 18:30:28.040223 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 18:30:28.042418 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 18:30:28.072879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:30:28.079359 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 18:30:28.087223 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 27 18:30:28.091185 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 18:30:28.092200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 18:30:28.099282 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 18:30:28.103122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 18:30:28.122883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 18:30:28.129070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 18:30:28.135207 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 18:30:28.139123 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 18:30:28.142181 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 18:30:28.176701 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 18:30:28.206059 systemd-resolved[247]: Positive Trust Anchors: May 27 18:30:28.206086 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 18:30:28.206152 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 18:30:28.211318 systemd-resolved[247]: Defaulting to hostname 'linux'. May 27 18:30:28.214795 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 18:30:28.216050 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 18:30:28.333030 kernel: SCSI subsystem initialized May 27 18:30:28.346999 kernel: Loading iSCSI transport class v2.0-870. May 27 18:30:28.364016 kernel: iscsi: registered transport (tcp) May 27 18:30:28.397086 kernel: iscsi: registered transport (qla4xxx) May 27 18:30:28.397203 kernel: QLogic iSCSI HBA Driver May 27 18:30:28.433623 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 18:30:28.458925 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 18:30:28.460445 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 18:30:28.531246 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 18:30:28.534394 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 27 18:30:28.598072 kernel: raid6: avx2x4 gen() 15987 MB/s May 27 18:30:28.615038 kernel: raid6: avx2x2 gen() 16029 MB/s May 27 18:30:28.632446 kernel: raid6: avx2x1 gen() 12169 MB/s May 27 18:30:28.632551 kernel: raid6: using algorithm avx2x2 gen() 16029 MB/s May 27 18:30:28.650499 kernel: raid6: .... xor() 17252 MB/s, rmw enabled May 27 18:30:28.650619 kernel: raid6: using avx2x2 recovery algorithm May 27 18:30:28.676032 kernel: xor: automatically using best checksumming function avx May 27 18:30:28.886026 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 18:30:28.897802 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 18:30:28.901423 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 18:30:28.935519 systemd-udevd[459]: Using default interface naming scheme 'v255'. May 27 18:30:28.943880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 18:30:28.948883 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 18:30:28.990354 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation May 27 18:30:29.033165 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 18:30:29.037204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 18:30:29.122679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 18:30:29.127651 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 18:30:29.212393 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues May 27 18:30:29.219009 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues May 27 18:30:29.222012 kernel: scsi host0: Virtio SCSI HBA May 27 18:30:29.225006 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 27 18:30:29.254951 kernel: libata version 3.00 loaded. May 27 18:30:29.265642 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 18:30:29.265769 kernel: GPT:9289727 != 125829119 May 27 18:30:29.265792 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 18:30:29.265814 kernel: GPT:9289727 != 125829119 May 27 18:30:29.265830 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 18:30:29.265850 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 18:30:29.268010 kernel: ata_piix 0000:00:01.1: version 2.13 May 27 18:30:29.271991 kernel: scsi host1: ata_piix May 27 18:30:29.276982 kernel: scsi host2: ata_piix May 27 18:30:29.277433 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 May 27 18:30:29.277464 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 May 27 18:30:29.285300 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues May 27 18:30:29.289391 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) May 27 18:30:29.309104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 18:30:29.310402 kernel: cryptd: max_cpu_qlen set to 1000 May 27 18:30:29.309574 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:30:29.312020 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:30:29.315315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 27 18:30:29.320465 kernel: ACPI: bus type USB registered May 27 18:30:29.320180 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 18:30:29.330099 kernel: usbcore: registered new interface driver usbfs May 27 18:30:29.330147 kernel: usbcore: registered new interface driver hub May 27 18:30:29.330167 kernel: usbcore: registered new device driver usb May 27 18:30:29.410724 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:30:29.473061 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 27 18:30:29.493180 kernel: AES CTR mode by8 optimization enabled May 27 18:30:29.555726 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller May 27 18:30:29.556555 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 May 27 18:30:29.556826 kernel: uhci_hcd 0000:00:01.2: detected 2 ports May 27 18:30:29.555786 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 18:30:29.559110 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 May 27 18:30:29.566448 kernel: hub 1-0:1.0: USB hub found May 27 18:30:29.566773 kernel: hub 1-0:1.0: 2 ports detected May 27 18:30:29.600720 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 18:30:29.609119 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 18:30:29.618924 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 18:30:29.619604 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 18:30:29.636377 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 18:30:29.637837 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 18:30:29.638555 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 18:30:29.639994 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 18:30:29.643215 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 18:30:29.647243 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 18:30:29.678104 disk-uuid[614]: Primary Header is updated. May 27 18:30:29.678104 disk-uuid[614]: Secondary Entries is updated. May 27 18:30:29.678104 disk-uuid[614]: Secondary Header is updated. May 27 18:30:29.684516 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 18:30:29.694007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 18:30:30.704067 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 18:30:30.706021 disk-uuid[620]: The operation has completed successfully. May 27 18:30:30.782981 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 18:30:30.783110 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 18:30:30.811223 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 18:30:30.839632 sh[633]: Success May 27 18:30:30.862044 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 27 18:30:30.862156 kernel: device-mapper: uevent: version 1.0.3 May 27 18:30:30.863540 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 18:30:30.877008 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 27 18:30:30.954160 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 18:30:30.960132 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 18:30:30.977557 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 18:30:30.994997 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 18:30:30.996995 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (645) May 27 18:30:31.000496 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd May 27 18:30:31.000625 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 18:30:31.000651 kernel: BTRFS info (device dm-0): using free-space-tree May 27 18:30:31.014126 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 18:30:31.014824 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 18:30:31.015485 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 18:30:31.018211 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 18:30:31.022161 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 18:30:31.064025 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (677) May 27 18:30:31.069051 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:30:31.069171 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:30:31.069196 kernel: BTRFS info (device vda6): using free-space-tree May 27 18:30:31.083001 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:30:31.085046 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 18:30:31.088221 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 18:30:31.230580 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 18:30:31.239514 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 27 18:30:31.333917 ignition[727]: Ignition 2.21.0 May 27 18:30:31.333937 ignition[727]: Stage: fetch-offline May 27 18:30:31.334091 ignition[727]: no configs at "/usr/lib/ignition/base.d" May 27 18:30:31.335197 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:30:31.335506 ignition[727]: parsed url from cmdline: "" May 27 18:30:31.335514 ignition[727]: no config URL provided May 27 18:30:31.335547 ignition[727]: reading system config file "/usr/lib/ignition/user.ign" May 27 18:30:31.335571 ignition[727]: no config at "/usr/lib/ignition/user.ign" May 27 18:30:31.342635 systemd-networkd[817]: lo: Link UP May 27 18:30:31.335582 ignition[727]: failed to fetch config: resource requires networking May 27 18:30:31.342644 systemd-networkd[817]: lo: Gained carrier May 27 18:30:31.336083 ignition[727]: Ignition finished successfully May 27 18:30:31.343129 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 18:30:31.347249 systemd-networkd[817]: Enumeration completed May 27 18:30:31.347851 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. May 27 18:30:31.347859 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. May 27 18:30:31.348775 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 18:30:31.349069 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 18:30:31.349077 systemd-networkd[817]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 18:30:31.350216 systemd-networkd[817]: eth0: Link UP May 27 18:30:31.350222 systemd-networkd[817]: eth0: Gained carrier May 27 18:30:31.350239 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. May 27 18:30:31.350938 systemd[1]: Reached target network.target - Network. May 27 18:30:31.354681 systemd-networkd[817]: eth1: Link UP May 27 18:30:31.354689 systemd-networkd[817]: eth1: Gained carrier May 27 18:30:31.354717 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 18:30:31.356809 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 27 18:30:31.369115 systemd-networkd[817]: eth0: DHCPv4 address 146.190.128.44/20, gateway 146.190.128.1 acquired from 169.254.169.253 May 27 18:30:31.375113 systemd-networkd[817]: eth1: DHCPv4 address 10.124.0.33/20 acquired from 169.254.169.253 May 27 18:30:31.406805 ignition[824]: Ignition 2.21.0 May 27 18:30:31.407025 ignition[824]: Stage: fetch May 27 18:30:31.407367 ignition[824]: no configs at "/usr/lib/ignition/base.d" May 27 18:30:31.407389 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:30:31.407561 ignition[824]: parsed url from cmdline: "" May 27 18:30:31.407569 ignition[824]: no config URL provided May 27 18:30:31.407579 ignition[824]: reading system config file "/usr/lib/ignition/user.ign" May 27 18:30:31.407599 ignition[824]: no config at "/usr/lib/ignition/user.ign" May 27 18:30:31.407693 ignition[824]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 May 27 18:30:31.424236 ignition[824]: GET result: OK May 27 18:30:31.424387 ignition[824]: parsing config with SHA512: b7974fdc25f46c0c293f63c0a8f89e3d9a46854ba769f4749297dd45e05ba25bf368410768b5dc15195feff8e2f94c910dff73575b1ab14638d4debf07ac7b84 May 27 18:30:31.429540 unknown[824]: fetched base config from "system" May 27 18:30:31.429742 unknown[824]: fetched base config from "system" May 27 18:30:31.430057 ignition[824]: fetch: fetch complete May 27 18:30:31.429755 unknown[824]: fetched user config from "digitalocean" May 27 18:30:31.430063 ignition[824]: fetch: fetch passed May 27 18:30:31.430142 ignition[824]: Ignition finished successfully May 27 18:30:31.434764 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 27 18:30:31.442343 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 18:30:31.498473 ignition[832]: Ignition 2.21.0 May 27 18:30:31.498498 ignition[832]: Stage: kargs May 27 18:30:31.498782 ignition[832]: no configs at "/usr/lib/ignition/base.d" May 27 18:30:31.498803 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:30:31.500475 ignition[832]: kargs: kargs passed May 27 18:30:31.503008 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 18:30:31.500598 ignition[832]: Ignition finished successfully May 27 18:30:31.505670 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 18:30:31.554698 ignition[838]: Ignition 2.21.0 May 27 18:30:31.554722 ignition[838]: Stage: disks May 27 18:30:31.555016 ignition[838]: no configs at "/usr/lib/ignition/base.d" May 27 18:30:31.555034 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:30:31.556610 ignition[838]: disks: disks passed May 27 18:30:31.556709 ignition[838]: Ignition finished successfully May 27 18:30:31.560207 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 18:30:31.561806 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 18:30:31.562542 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 18:30:31.563644 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 18:30:31.564886 systemd[1]: Reached target sysinit.target - System Initialization. May 27 18:30:31.565713 systemd[1]: Reached target basic.target - Basic System. May 27 18:30:31.568720 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 27 18:30:31.605399 systemd-fsck[847]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 18:30:31.608785 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 18:30:31.612393 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 18:30:31.757991 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none. May 27 18:30:31.759525 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 18:30:31.761413 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 18:30:31.764613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 18:30:31.767067 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 18:30:31.775871 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... May 27 18:30:31.780181 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 27 18:30:31.783181 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 18:30:31.783335 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 18:30:31.794207 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 18:30:31.795024 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (855) May 27 18:30:31.799997 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:30:31.802130 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:30:31.802240 kernel: BTRFS info (device vda6): using free-space-tree May 27 18:30:31.802288 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 18:30:31.838350 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 18:30:31.914934 coreos-metadata[857]: May 27 18:30:31.914 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 27 18:30:31.917080 coreos-metadata[858]: May 27 18:30:31.916 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 27 18:30:31.918221 initrd-setup-root[885]: cut: /sysroot/etc/passwd: No such file or directory May 27 18:30:31.926052 initrd-setup-root[892]: cut: /sysroot/etc/group: No such file or directory May 27 18:30:31.928257 coreos-metadata[857]: May 27 18:30:31.928 INFO Fetch successful May 27 18:30:31.929288 coreos-metadata[858]: May 27 18:30:31.928 INFO Fetch successful May 27 18:30:31.935053 coreos-metadata[858]: May 27 18:30:31.934 INFO wrote hostname ci-4344.0.0-1-cb46b2958a to /sysroot/etc/hostname May 27 18:30:31.936211 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. May 27 18:30:31.936341 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. May 27 18:30:31.938003 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 27 18:30:31.941171 initrd-setup-root[899]: cut: /sysroot/etc/shadow: No such file or directory May 27 18:30:31.948092 initrd-setup-root[908]: cut: /sysroot/etc/gshadow: No such file or directory May 27 18:30:32.092998 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 18:30:32.095153 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 18:30:32.098161 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 27 18:30:32.125559 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 18:30:32.126404 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:30:32.153041 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 18:30:32.170998 ignition[977]: INFO : Ignition 2.21.0 May 27 18:30:32.170998 ignition[977]: INFO : Stage: mount May 27 18:30:32.170998 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 18:30:32.170998 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:30:32.174198 ignition[977]: INFO : mount: mount passed May 27 18:30:32.174198 ignition[977]: INFO : Ignition finished successfully May 27 18:30:32.178275 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 18:30:32.181371 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 18:30:32.215301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 18:30:32.248088 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (988) May 27 18:30:32.252404 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:30:32.252566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:30:32.252617 kernel: BTRFS info (device vda6): using free-space-tree May 27 18:30:32.262861 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 18:30:32.315347 ignition[1004]: INFO : Ignition 2.21.0 May 27 18:30:32.315347 ignition[1004]: INFO : Stage: files May 27 18:30:32.318206 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 18:30:32.318206 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:30:32.318206 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping May 27 18:30:32.321405 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 18:30:32.321405 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 18:30:32.329868 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 18:30:32.332081 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 18:30:32.333733 unknown[1004]: wrote ssh authorized keys file for user: core May 27 18:30:32.334811 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 18:30:32.337548 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 27 18:30:32.338602 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 27 18:30:32.340364 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 18:30:32.341591 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 18:30:32.341591 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 18:30:32.343340 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 18:30:32.343340 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 18:30:32.343340 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 27 18:30:32.482434 systemd-networkd[817]: eth0: Gained IPv6LL May 27 18:30:33.059200 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 27 18:30:33.315478 systemd-networkd[817]: eth1: Gained IPv6LL May 27 18:30:33.416299 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 18:30:33.417897 ignition[1004]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 18:30:33.417897 ignition[1004]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 18:30:33.417897 ignition[1004]: INFO : files: files passed May 27 18:30:33.417897 ignition[1004]: INFO : Ignition finished successfully May 27 18:30:33.419369 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 18:30:33.422948 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 18:30:33.425143 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 18:30:33.445705 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 18:30:33.445951 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 18:30:33.461308 initrd-setup-root-after-ignition[1035]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 18:30:33.461308 initrd-setup-root-after-ignition[1035]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 18:30:33.465711 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 18:30:33.469973 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 18:30:33.470778 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 18:30:33.473159 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 18:30:33.542712 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 18:30:33.543016 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 18:30:33.545179 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 18:30:33.545725 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 18:30:33.546956 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 18:30:33.548763 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 18:30:33.586263 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 18:30:33.589034 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 18:30:33.620752 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
May 27 18:30:33.622737 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 18:30:33.623605 systemd[1]: Stopped target timers.target - Timer Units. May 27 18:30:33.624756 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 18:30:33.625123 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 18:30:33.626705 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 18:30:33.627536 systemd[1]: Stopped target basic.target - Basic System. May 27 18:30:33.628387 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 18:30:33.629170 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 18:30:33.630226 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 18:30:33.631033 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 18:30:33.632135 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 18:30:33.632878 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 18:30:33.634084 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 18:30:33.634859 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 18:30:33.635863 systemd[1]: Stopped target swap.target - Swaps. May 27 18:30:33.636868 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 18:30:33.637164 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 18:30:33.638859 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 18:30:33.639566 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 18:30:33.640776 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 18:30:33.641101 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 18:30:33.641886 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 18:30:33.642141 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 18:30:33.643862 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 18:30:33.644263 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 18:30:33.645214 systemd[1]: ignition-files.service: Deactivated successfully. May 27 18:30:33.645431 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 18:30:33.646365 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 27 18:30:33.646659 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 27 18:30:33.650401 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 18:30:33.653460 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 18:30:33.655375 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 18:30:33.655673 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 18:30:33.660192 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 18:30:33.660577 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 18:30:33.671546 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 18:30:33.675182 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 27 18:30:33.697023 ignition[1059]: INFO : Ignition 2.21.0 May 27 18:30:33.697023 ignition[1059]: INFO : Stage: umount May 27 18:30:33.697023 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 18:30:33.697023 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:30:33.705155 ignition[1059]: INFO : umount: umount passed May 27 18:30:33.705155 ignition[1059]: INFO : Ignition finished successfully May 27 18:30:33.705239 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 18:30:33.707876 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 18:30:33.708094 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 18:30:33.722639 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 18:30:33.722838 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 18:30:33.723561 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 18:30:33.723669 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 18:30:33.724562 systemd[1]: ignition-fetch.service: Deactivated successfully. May 27 18:30:33.724645 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 27 18:30:33.725169 systemd[1]: Stopped target network.target - Network. May 27 18:30:33.729423 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 18:30:33.729580 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 18:30:33.730340 systemd[1]: Stopped target paths.target - Path Units. May 27 18:30:33.731097 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 18:30:33.731324 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 18:30:33.737113 systemd[1]: Stopped target slices.target - Slice Units. May 27 18:30:33.743362 systemd[1]: Stopped target sockets.target - Socket Units. May 27 18:30:33.744420 systemd[1]: iscsid.socket: Deactivated successfully. May 27 18:30:33.744485 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 18:30:33.745161 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 18:30:33.745224 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 18:30:33.749895 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 18:30:33.750072 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 18:30:33.751510 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 18:30:33.751616 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 18:30:33.753030 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 18:30:33.754182 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 18:30:33.756376 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 18:30:33.756569 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 18:30:33.758306 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 18:30:33.758486 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 18:30:33.763181 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 18:30:33.764017 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 18:30:33.771913 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
May 27 18:30:33.772595 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 18:30:33.772818 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 18:30:33.775186 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 18:30:33.776580 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 18:30:33.777777 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 18:30:33.777850 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 18:30:33.781202 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 18:30:33.781696 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 18:30:33.781793 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 18:30:33.783481 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 18:30:33.783584 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 18:30:33.786294 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 18:30:33.786396 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 18:30:33.788760 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 18:30:33.788877 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 18:30:33.791142 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 18:30:33.795568 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 18:30:33.795760 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 18:30:33.812467 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 18:30:33.814460 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 18:30:33.816455 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 18:30:33.816533 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 18:30:33.818459 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 18:30:33.818532 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 18:30:33.819236 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 18:30:33.819328 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 18:30:33.819992 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 18:30:33.820064 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 18:30:33.820609 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 18:30:33.820684 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 18:30:33.822631 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 18:30:33.824272 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 18:30:33.824468 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 18:30:33.826931 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 18:30:33.828122 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 27 18:30:33.829733 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 18:30:33.829835 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 18:30:33.832119 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 18:30:33.832203 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 18:30:33.833838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 18:30:33.833936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:30:33.838132 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 27 18:30:33.838276 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 27 18:30:33.838340 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 18:30:33.838405 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 18:30:33.841047 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 18:30:33.841305 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 18:30:33.853236 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 18:30:33.853446 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 18:30:33.855341 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 18:30:33.857386 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 18:30:33.881932 systemd[1]: Switching root. May 27 18:30:33.926126 systemd-journald[211]: Journal stopped May 27 18:30:35.420134 systemd-journald[211]: Received SIGTERM from PID 1 (systemd). May 27 18:30:35.420274 kernel: SELinux: policy capability network_peer_controls=1 May 27 18:30:35.420302 kernel: SELinux: policy capability open_perms=1 May 27 18:30:35.420326 kernel: SELinux: policy capability extended_socket_class=1 May 27 18:30:35.420349 kernel: SELinux: policy capability always_check_network=0 May 27 18:30:35.420379 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 18:30:35.420403 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 18:30:35.420432 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 18:30:35.420452 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 18:30:35.420470 kernel: SELinux: policy capability userspace_initial_context=0 May 27 18:30:35.420490 kernel: audit: type=1403 audit(1748370634.059:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 18:30:35.420513 systemd[1]: Successfully loaded SELinux policy in 42.406ms. May 27 18:30:35.420556 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.592ms. May 27 18:30:35.420572 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 18:30:35.420588 systemd[1]: Detected virtualization kvm. May 27 18:30:35.420607 systemd[1]: Detected architecture x86-64. May 27 18:30:35.420621 systemd[1]: Detected first boot. May 27 18:30:35.420636 systemd[1]: Hostname set to <ci-4344.0.0-1-cb46b2958a>. 
May 27 18:30:35.420656 systemd[1]: Initializing machine ID from VM UUID. May 27 18:30:35.420671 zram_generator::config[1104]: No configuration found. May 27 18:30:35.420686 kernel: Guest personality initialized and is inactive May 27 18:30:35.420700 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 18:30:35.420714 kernel: Initialized host personality May 27 18:30:35.420727 kernel: NET: Registered PF_VSOCK protocol family May 27 18:30:35.420745 systemd[1]: Populated /etc with preset unit settings. May 27 18:30:35.420787 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 18:30:35.420802 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 18:30:35.420816 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 18:30:35.420830 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 18:30:35.420851 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 18:30:35.420867 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 18:30:35.420881 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 18:30:35.420895 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 18:30:35.420914 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 18:30:35.420929 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 18:30:35.420949 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 18:30:35.421016 systemd[1]: Created slice user.slice - User and Session Slice. May 27 18:30:35.421039 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 18:30:35.421064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 18:30:35.421085 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 18:30:35.421119 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 18:30:35.421144 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 18:30:35.421170 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 18:30:35.421195 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 18:30:35.421220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 18:30:35.421245 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 18:30:35.421269 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 18:30:35.421290 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 18:30:35.421320 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 18:30:35.421341 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 18:30:35.421365 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 18:30:35.421383 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 18:30:35.421399 systemd[1]: Reached target slices.target - Slice Units. May 27 18:30:35.421421 systemd[1]: Reached target swap.target - Swaps. 
May 27 18:30:35.421449 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 18:30:35.421471 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 18:30:35.421493 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 18:30:35.421527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 18:30:35.421549 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 18:30:35.421573 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 18:30:35.421597 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 18:30:35.421622 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 18:30:35.421647 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 18:30:35.421668 systemd[1]: Mounting media.mount - External Media Directory... May 27 18:30:35.421692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:35.421716 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 18:30:35.421743 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 18:30:35.421765 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 18:30:35.421788 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 18:30:35.421811 systemd[1]: Reached target machines.target - Containers. May 27 18:30:35.421837 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 18:30:35.421861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:30:35.421885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 18:30:35.421910 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 18:30:35.421939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 18:30:35.422000 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 18:30:35.422026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 18:30:35.422050 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 18:30:35.422076 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 18:30:35.422101 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 18:30:35.422121 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 18:30:35.422142 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 18:30:35.422191 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 18:30:35.422218 systemd[1]: Stopped systemd-fsck-usr.service. May 27 18:30:35.422249 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:30:35.422275 systemd[1]: Starting systemd-journald.service - Journal Service... 
May 27 18:30:35.422300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 18:30:35.422323 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 18:30:35.422350 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 18:30:35.422373 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 18:30:35.422394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 18:30:35.422415 systemd[1]: verity-setup.service: Deactivated successfully. May 27 18:30:35.422437 systemd[1]: Stopped verity-setup.service. May 27 18:30:35.422465 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:35.422488 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 18:30:35.422510 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 18:30:35.422539 systemd[1]: Mounted media.mount - External Media Directory. May 27 18:30:35.422564 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 18:30:35.422589 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 18:30:35.422615 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 18:30:35.422640 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 18:30:35.422671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 18:30:35.422695 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 18:30:35.422720 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 18:30:35.422742 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 18:30:35.422768 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 18:30:35.422793 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 18:30:35.422819 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 18:30:35.422858 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 18:30:35.422884 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 18:30:35.422914 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 18:30:35.422938 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 18:30:35.427538 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 18:30:35.427617 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 18:30:35.427645 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 18:30:35.427681 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 18:30:35.427703 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 18:30:35.427726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:30:35.427764 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
May 27 18:30:35.427861 systemd-journald[1173]: Collecting audit messages is disabled. May 27 18:30:35.427913 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 18:30:35.427940 systemd-journald[1173]: Journal started May 27 18:30:35.429048 systemd-journald[1173]: Runtime Journal (/run/log/journal/31ee8156a08548e99fdcf00bef40df42) is 4.9M, max 39.5M, 34.6M free. May 27 18:30:35.001823 systemd[1]: Queued start job for default target multi-user.target. May 27 18:30:35.024567 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 18:30:35.025365 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 18:30:35.442093 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 18:30:35.442211 systemd[1]: Started systemd-journald.service - Journal Service. May 27 18:30:35.454407 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 18:30:35.458416 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 18:30:35.497329 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 18:30:35.498005 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 18:30:35.512915 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 18:30:35.525574 kernel: fuse: init (API version 7.41) May 27 18:30:35.525056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 18:30:35.545038 kernel: loop: module loaded May 27 18:30:35.533147 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 18:30:35.537074 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 27 18:30:35.537093 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 27 18:30:35.540369 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 18:30:35.545322 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 18:30:35.547309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 18:30:35.570176 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 18:30:35.570479 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 18:30:35.571577 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 18:30:35.571855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 18:30:35.587005 kernel: loop0: detected capacity change from 0 to 146240 May 27 18:30:35.586451 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 18:30:37.594278 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2135251405 wd_nsec: 2135250687 May 27 18:30:37.594457 kernel: ACPI: bus type drm_connector registered May 27 18:30:37.595111 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 18:30:37.600506 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 18:30:37.600750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 18:30:37.611125 systemd-journald[1173]: Time spent on flushing to /var/log/journal/31ee8156a08548e99fdcf00bef40df42 is 90.016ms for 1002 entries. 
May 27 18:30:37.611125 systemd-journald[1173]: System Journal (/var/log/journal/31ee8156a08548e99fdcf00bef40df42) is 8M, max 195.6M, 187.6M free. May 27 18:30:37.736725 systemd-journald[1173]: Received client request to flush runtime journal. May 27 18:30:37.736831 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 18:30:37.736857 kernel: loop1: detected capacity change from 0 to 229808 May 27 18:30:37.740987 kernel: loop2: detected capacity change from 0 to 113872 May 27 18:30:37.617836 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 18:30:37.623259 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 18:30:37.627420 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 18:30:37.631081 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 18:30:37.633762 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 18:30:37.742239 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 18:30:37.761136 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 18:30:37.793121 kernel: loop3: detected capacity change from 0 to 8 May 27 18:30:37.802102 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 18:30:37.810780 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 18:30:37.831999 kernel: loop4: detected capacity change from 0 to 146240 May 27 18:30:37.866498 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. May 27 18:30:37.869510 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. May 27 18:30:37.895745 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 18:30:37.895980 kernel: loop5: detected capacity change from 0 to 229808 May 27 18:30:37.952005 kernel: loop6: detected capacity change from 0 to 113872 May 27 18:30:38.007999 kernel: loop7: detected capacity change from 0 to 8 May 27 18:30:38.016208 (sd-merge)[1253]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. May 27 18:30:38.016885 (sd-merge)[1253]: Merged extensions into '/usr'. May 27 18:30:38.029222 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)... May 27 18:30:38.029247 systemd[1]: Reloading... May 27 18:30:38.176006 zram_generator::config[1277]: No configuration found. May 27 18:30:38.454287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:30:38.486974 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 18:30:38.556778 systemd[1]: Reloading finished in 524 ms. May 27 18:30:38.581809 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 18:30:38.588771 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 18:30:38.596162 systemd[1]: Starting ensure-sysext.service... May 27 18:30:38.604199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 18:30:38.631443 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)... May 27 18:30:38.631470 systemd[1]: Reloading... 
May 27 18:30:38.682662 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 18:30:38.682698 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 18:30:38.683051 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 18:30:38.683325 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 18:30:38.686579 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 18:30:38.687039 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. May 27 18:30:38.687135 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. May 27 18:30:38.698187 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. May 27 18:30:38.698202 systemd-tmpfiles[1326]: Skipping /boot May 27 18:30:38.734423 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. May 27 18:30:38.734437 systemd-tmpfiles[1326]: Skipping /boot May 27 18:30:38.796995 zram_generator::config[1353]: No configuration found. May 27 18:30:38.911167 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:30:39.012290 systemd[1]: Reloading finished in 380 ms. May 27 18:30:39.033123 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 18:30:39.039781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 18:30:39.050178 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 18:30:39.052415 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 18:30:39.055214 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 18:30:39.061236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 18:30:39.066812 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 18:30:39.072271 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 18:30:39.078944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:39.080112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:30:39.082313 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 18:30:39.085334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 18:30:39.086911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 18:30:39.087514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:30:39.087643 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 27 18:30:39.087758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:39.093686 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:39.093896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:30:39.095309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:30:39.095431 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:30:39.095526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:39.099729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:39.100941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:30:39.119342 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 18:30:39.119941 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:30:39.120086 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:30:39.120225 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:39.125311 systemd[1]: Finished ensure-sysext.service. May 27 18:30:39.141873 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 18:30:39.149927 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 18:30:39.151796 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 18:30:39.153275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 18:30:39.153550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 18:30:39.154790 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 18:30:39.155170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 18:30:39.161853 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 18:30:39.163306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 18:30:39.163577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 18:30:39.165646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 18:30:39.170117 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 18:30:39.171456 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 27 18:30:39.171899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 18:30:39.179178 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 18:30:39.209203 systemd-udevd[1402]: Using default interface naming scheme 'v255'. May 27 18:30:39.217101 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 18:30:39.219687 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 18:30:39.223924 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 18:30:39.237400 augenrules[1439]: No rules May 27 18:30:39.238458 systemd[1]: audit-rules.service: Deactivated successfully. May 27 18:30:39.239332 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 18:30:39.252128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 18:30:39.257720 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 18:30:39.269690 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 18:30:39.405721 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. May 27 18:30:39.408848 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... May 27 18:30:39.409408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:39.409608 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:30:39.412000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 18:30:39.414294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 18:30:39.417542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 18:30:39.418016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:30:39.418053 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:30:39.418085 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 18:30:39.418102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:30:39.456546 kernel: ISO 9660 Extensions: RRIP_1991A May 27 18:30:39.456366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 18:30:39.457105 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 18:30:39.458501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 18:30:39.459152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 18:30:39.467570 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. May 27 18:30:39.469298 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 27 18:30:39.469537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 18:30:39.474329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 18:30:39.474398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 18:30:39.520074 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 18:30:39.573482 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 18:30:39.576164 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 18:30:39.648337 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 18:30:39.667849 systemd-networkd[1453]: lo: Link UP May 27 18:30:39.667859 systemd-networkd[1453]: lo: Gained carrier May 27 18:30:39.671942 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 18:30:39.672731 systemd[1]: Reached target time-set.target - System Time Set. May 27 18:30:39.676389 systemd-networkd[1453]: Enumeration completed May 27 18:30:39.676546 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 18:30:39.676835 systemd-networkd[1453]: eth0: Configuring with /run/systemd/network/10-ce:e6:a9:86:16:d3.network. May 27 18:30:39.680106 systemd-timesyncd[1417]: No network connectivity, watching for changes. May 27 18:30:39.682410 systemd-networkd[1453]: eth1: Configuring with /run/systemd/network/10-e6:cc:f2:f3:9a:01.network. May 27 18:30:39.683768 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 18:30:39.686217 systemd-networkd[1453]: eth0: Link UP May 27 18:30:39.686986 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 18:30:39.690134 systemd-networkd[1453]: eth0: Gained carrier May 27 18:30:39.694536 systemd-networkd[1453]: eth1: Link UP May 27 18:30:39.695631 systemd-networkd[1453]: eth1: Gained carrier May 27 18:30:39.702212 systemd-resolved[1401]: Positive Trust Anchors: May 27 18:30:39.702228 systemd-resolved[1401]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 18:30:39.702264 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. May 27 18:30:39.702271 systemd-resolved[1401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 18:30:39.713168 systemd-resolved[1401]: Using system hostname 'ci-4344.0.0-1-cb46b2958a'. May 27 18:30:39.720762 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 18:30:39.721317 systemd[1]: Reached target network.target - Network. May 27 18:30:39.721686 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 27 18:30:39.722375 systemd[1]: Reached target sysinit.target - System Initialization. May 27 18:30:39.727366 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 18:30:39.727993 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 18:30:39.728775 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 18:30:39.729829 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 18:30:39.730579 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 18:30:39.731534 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 18:30:39.732015 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 18:30:39.732051 systemd[1]: Reached target paths.target - Path Units. May 27 18:30:39.732674 systemd[1]: Reached target timers.target - Timer Units. May 27 18:30:39.735361 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 18:30:39.738589 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 18:30:39.745786 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 18:30:39.748022 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 18:30:39.749095 kernel: mousedev: PS/2 mouse device common for all mice May 27 18:30:39.749699 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 18:30:39.758318 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 18:30:39.760812 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 18:30:39.764000 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 18:30:39.765320 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 18:30:39.771678 systemd[1]: Reached target sockets.target - Socket Units. May 27 18:30:39.772540 systemd[1]: Reached target basic.target - Basic System. May 27 18:30:39.773141 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 18:30:39.773180 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 18:30:39.776555 systemd[1]: Starting containerd.service - containerd container runtime... May 27 18:30:39.781220 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 18:30:39.785993 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 27 18:30:39.788267 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 18:30:39.790000 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 27 18:30:39.790388 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 18:30:39.794580 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 18:30:39.797051 kernel: ACPI: button: Power Button [PWRF] May 27 18:30:39.797956 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 18:30:39.802313 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 27 18:30:39.802828 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 18:30:39.807343 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 18:30:39.816038 jq[1512]: false May 27 18:30:39.815376 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 18:30:39.829166 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 18:30:39.836625 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing passwd entry cache May 27 18:30:39.837103 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 18:30:39.837611 oslogin_cache_refresh[1514]: Refreshing passwd entry cache May 27 18:30:39.843991 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting users, quitting May 27 18:30:39.843991 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 18:30:39.843991 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing group entry cache May 27 18:30:39.843145 oslogin_cache_refresh[1514]: Failure getting users, quitting May 27 18:30:39.843170 oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 18:30:39.843223 oslogin_cache_refresh[1514]: Refreshing group entry cache May 27 18:30:39.845159 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting groups, quitting May 27 18:30:39.845159 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 18:30:39.844911 oslogin_cache_refresh[1514]: Failure getting groups, quitting May 27 18:30:39.844924 oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 18:30:39.849585 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 18:30:39.853505 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 18:30:39.858166 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 18:30:39.859544 systemd[1]: Starting update-engine.service - Update Engine... May 27 18:30:39.866285 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 18:30:39.879314 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 18:30:39.880501 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 18:30:39.880811 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 18:30:39.881433 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 18:30:39.883064 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 18:30:40.711522 systemd-timesyncd[1417]: Contacted time server 45.77.126.122:123 (1.flatcar.pool.ntp.org). May 27 18:30:40.711590 systemd-timesyncd[1417]: Initial clock synchronization to Tue 2025-05-27 18:30:40.711069 UTC. May 27 18:30:40.712249 systemd-resolved[1401]: Clock change detected. Flushing caches. 
May 27 18:30:40.749533 coreos-metadata[1509]: May 27 18:30:40.749 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 27 18:30:40.758167 update_engine[1519]: I20250527 18:30:40.758039 1519 main.cc:92] Flatcar Update Engine starting May 27 18:30:40.760872 coreos-metadata[1509]: May 27 18:30:40.760 INFO Fetch successful May 27 18:30:40.765490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 18:30:40.765760 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 18:30:40.795030 jq[1520]: true May 27 18:30:40.796459 dbus-daemon[1510]: [system] SELinux support is enabled May 27 18:30:40.804226 update_engine[1519]: I20250527 18:30:40.800806 1519 update_check_scheduler.cc:74] Next update check in 11m6s May 27 18:30:40.802306 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 18:30:40.806826 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 18:30:40.806887 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 18:30:40.807421 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 18:30:40.807504 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). May 27 18:30:40.807522 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 18:30:40.809106 systemd[1]: Started update-engine.service - Update Engine. 
May 27 18:30:40.844337 extend-filesystems[1513]: Found loop4 May 27 18:30:40.844337 extend-filesystems[1513]: Found loop5 May 27 18:30:40.844337 extend-filesystems[1513]: Found loop6 May 27 18:30:40.844337 extend-filesystems[1513]: Found loop7 May 27 18:30:40.844337 extend-filesystems[1513]: Found vda May 27 18:30:40.844337 extend-filesystems[1513]: Found vda1 May 27 18:30:40.844337 extend-filesystems[1513]: Found vda2 May 27 18:30:40.844337 extend-filesystems[1513]: Found vda3 May 27 18:30:40.844337 extend-filesystems[1513]: Found usr May 27 18:30:40.844337 extend-filesystems[1513]: Found vda4 May 27 18:30:40.844337 extend-filesystems[1513]: Found vda6 May 27 18:30:40.844337 extend-filesystems[1513]: Found vda7 May 27 18:30:40.844337 extend-filesystems[1513]: Found vda9 May 27 18:30:40.844337 extend-filesystems[1513]: Checking size of /dev/vda9 May 27 18:30:40.894608 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 27 18:30:40.894663 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 27 18:30:40.894975 kernel: Console: switching to colour dummy device 80x25 May 27 18:30:40.895045 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 27 18:30:40.895075 kernel: [drm] features: -context_init May 27 18:30:40.895100 kernel: [drm] number of scanouts: 1 May 27 18:30:40.895120 kernel: [drm] number of cap sets: 0 May 27 18:30:40.895135 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 May 27 18:30:40.845582 (ntainerd)[1535]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 18:30:40.889477 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 18:30:40.893654 systemd[1]: motdgen.service: Deactivated successfully. May 27 18:30:40.900414 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 18:30:40.929089 jq[1539]: true May 27 18:30:40.930283 extend-filesystems[1513]: Resized partition /dev/vda9 May 27 18:30:40.932524 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 18:30:40.932954 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 18:30:40.954017 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 27 18:30:40.954144 extend-filesystems[1559]: resize2fs 1.47.2 (1-Jan-2025) May 27 18:30:41.123110 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 27 18:30:41.153498 extend-filesystems[1559]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 18:30:41.153498 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 8 May 27 18:30:41.153498 extend-filesystems[1559]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 27 18:30:41.154811 extend-filesystems[1513]: Resized filesystem in /dev/vda9 May 27 18:30:41.154811 extend-filesystems[1513]: Found vdb May 27 18:30:41.156576 bash[1583]: Updated "/home/core/.ssh/authorized_keys" May 27 18:30:41.156447 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 18:30:41.158339 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 18:30:41.160713 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 18:30:41.176381 systemd[1]: Starting sshkeys.service... 
May 27 18:30:41.250374 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 18:30:41.257212 systemd-logind[1518]: New seat seat0. May 27 18:30:41.263181 systemd[1]: Started systemd-logind.service - User Login Management. May 27 18:30:41.265103 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 18:30:41.268341 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 18:30:41.352554 coreos-metadata[1595]: May 27 18:30:41.352 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 27 18:30:41.371930 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 18:30:41.374081 coreos-metadata[1595]: May 27 18:30:41.374 INFO Fetch successful May 27 18:30:41.388212 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 18:30:41.392873 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:30:41.417568 unknown[1595]: wrote ssh authorized keys file for user: core May 27 18:30:41.459718 systemd[1]: issuegen.service: Deactivated successfully. May 27 18:30:41.460080 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 18:30:41.464080 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 18:30:41.494407 update-ssh-keys[1607]: Updated "/home/core/.ssh/authorized_keys" May 27 18:30:41.494787 containerd[1535]: time="2025-05-27T18:30:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 18:30:41.500839 containerd[1535]: time="2025-05-27T18:30:41.499227715Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 18:30:41.503597 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 18:30:41.507867 systemd[1]: Finished sshkeys.service. May 27 18:30:41.545368 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 18:30:41.551210 systemd-logind[1518]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 18:30:41.553821 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 18:30:41.557511 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 18:30:41.557883 systemd[1]: Reached target getty.target - Login Prompts. 
May 27 18:30:41.584716 containerd[1535]: time="2025-05-27T18:30:41.584661668Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.038µs" May 27 18:30:41.585014 containerd[1535]: time="2025-05-27T18:30:41.584948766Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.585553611Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.585781138Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.585801963Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.585831075Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.585897410Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.585912760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.586257142Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.586290600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.586303378Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.586312665Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.586446541Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 18:30:41.587028 containerd[1535]: time="2025-05-27T18:30:41.586788467Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 18:30:41.587537 containerd[1535]: time="2025-05-27T18:30:41.586838073Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 18:30:41.587537 containerd[1535]: time="2025-05-27T18:30:41.586853544Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 18:30:41.587537 containerd[1535]: time="2025-05-27T18:30:41.586940042Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 18:30:41.591505 containerd[1535]: 
time="2025-05-27T18:30:41.591453548Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 18:30:41.591776 containerd[1535]: time="2025-05-27T18:30:41.591749790Z" level=info msg="metadata content store policy set" policy=shared May 27 18:30:41.596607 containerd[1535]: time="2025-05-27T18:30:41.596551352Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 18:30:41.596841 containerd[1535]: time="2025-05-27T18:30:41.596817472Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 18:30:41.597014 containerd[1535]: time="2025-05-27T18:30:41.596975131Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 18:30:41.597099 containerd[1535]: time="2025-05-27T18:30:41.597084528Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 18:30:41.597168 containerd[1535]: time="2025-05-27T18:30:41.597154222Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 18:30:41.597254 containerd[1535]: time="2025-05-27T18:30:41.597238889Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 18:30:41.597324 containerd[1535]: time="2025-05-27T18:30:41.597310258Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 18:30:41.597749 containerd[1535]: time="2025-05-27T18:30:41.597716097Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 18:30:41.597852 containerd[1535]: time="2025-05-27T18:30:41.597836381Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 18:30:41.597923 containerd[1535]: time="2025-05-27T18:30:41.597907683Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 18:30:41.598060 containerd[1535]: time="2025-05-27T18:30:41.597971737Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 18:30:41.598185 containerd[1535]: time="2025-05-27T18:30:41.598163451Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 18:30:41.598415 containerd[1535]: time="2025-05-27T18:30:41.598389059Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 18:30:41.599329 containerd[1535]: time="2025-05-27T18:30:41.599011000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601103164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601165703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601196431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601219031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 18:30:41.602014 containerd[1535]: 
time="2025-05-27T18:30:41.601233779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601251811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601270650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601296145Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601315666Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601543446Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601577272Z" level=info msg="Start snapshots syncer" May 27 18:30:41.602014 containerd[1535]: time="2025-05-27T18:30:41.601609035Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 18:30:41.610040 containerd[1535]: time="2025-05-27T18:30:41.609099577Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 18:30:41.610040 containerd[1535]: time="2025-05-27T18:30:41.609209583Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 18:30:41.615077 containerd[1535]: time="2025-05-27T18:30:41.614649463Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
May 27 18:30:41.615385 containerd[1535]: time="2025-05-27T18:30:41.615354444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 18:30:41.616020 containerd[1535]: time="2025-05-27T18:30:41.615956389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 18:30:41.616255 containerd[1535]: time="2025-05-27T18:30:41.616221069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 18:30:41.616295 containerd[1535]: time="2025-05-27T18:30:41.616260731Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 18:30:41.616295 containerd[1535]: time="2025-05-27T18:30:41.616282444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 18:30:41.616378 containerd[1535]: time="2025-05-27T18:30:41.616298190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 18:30:41.616378 containerd[1535]: time="2025-05-27T18:30:41.616315892Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 18:30:41.616659 containerd[1535]: time="2025-05-27T18:30:41.616630952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 18:30:41.616713 containerd[1535]: time="2025-05-27T18:30:41.616661407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 18:30:41.616713 containerd[1535]: time="2025-05-27T18:30:41.616681062Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619657323Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619717731Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619732965Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619751167Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619762409Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619792982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619815982Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619842683Z" level=info msg="runtime interface created" May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619850006Z" level=info msg="created NRI interface" May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619870427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619897858Z" level=info msg="Connect containerd service" May 27 18:30:41.620807 containerd[1535]: time="2025-05-27T18:30:41.619963623Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 18:30:41.622841 containerd[1535]: time="2025-05-27T18:30:41.620970458Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 18:30:41.628389 kernel: EDAC MC: Ver: 3.0.0 May 27 18:30:41.648279 systemd-logind[1518]: Watching system buttons on /dev/input/event2 (Power Button) May 27 18:30:41.681592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:30:41.706894 locksmithd[1543]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 18:30:41.730084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 18:30:41.730358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:30:41.730884 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:30:41.733104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:30:41.736897 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 18:30:41.900392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:30:41.932161 systemd-networkd[1453]: eth0: Gained IPv6LL May 27 18:30:41.936114 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 18:30:41.938508 systemd[1]: Reached target network-online.target - Network is Online. May 27 18:30:41.939586 containerd[1535]: time="2025-05-27T18:30:41.939460882Z" level=info msg="Start subscribing containerd event" May 27 18:30:41.939586 containerd[1535]: time="2025-05-27T18:30:41.939532958Z" level=info msg="Start recovering state" May 27 18:30:41.939709 containerd[1535]: time="2025-05-27T18:30:41.939692387Z" level=info msg="Start event monitor" May 27 18:30:41.939743 containerd[1535]: time="2025-05-27T18:30:41.939713396Z" level=info msg="Start cni network conf syncer for default" May 27 18:30:41.939743 containerd[1535]: time="2025-05-27T18:30:41.939732056Z" level=info msg="Start streaming server" May 27 18:30:41.939795 containerd[1535]: time="2025-05-27T18:30:41.939749953Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 18:30:41.939795 containerd[1535]: time="2025-05-27T18:30:41.939762056Z" level=info msg="runtime interface starting up..." May 27 18:30:41.939795 containerd[1535]: time="2025-05-27T18:30:41.939772302Z" level=info msg="starting plugins..." May 27 18:30:41.939795 containerd[1535]: time="2025-05-27T18:30:41.939789749Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 18:30:41.940601 containerd[1535]: time="2025-05-27T18:30:41.940559096Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 18:30:41.940692 containerd[1535]: time="2025-05-27T18:30:41.940638883Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 27 18:30:41.940730 containerd[1535]: time="2025-05-27T18:30:41.940703996Z" level=info msg="containerd successfully booted in 0.447948s" May 27 18:30:41.942554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:30:41.945507 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 18:30:41.947227 systemd[1]: Started containerd.service - containerd container runtime. May 27 18:30:41.987443 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 18:30:42.252321 systemd-networkd[1453]: eth1: Gained IPv6LL May 27 18:30:42.456964 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 18:30:42.464471 systemd[1]: Started sshd@0-146.190.128.44:22-139.178.68.195:51278.service - OpenSSH per-connection server daemon (139.178.68.195:51278). May 27 18:30:42.610461 sshd[1664]: Accepted publickey for core from 139.178.68.195 port 51278 ssh2: RSA SHA256:BDx/M33BokhBiX+qSaBTqBy2ZD0ak70ogCjs0cSoaGY May 27 18:30:42.613585 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:30:42.628753 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 18:30:42.631742 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 18:30:42.648660 systemd-logind[1518]: New session 1 of user core. May 27 18:30:42.671766 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 18:30:42.678713 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 18:30:42.692446 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 18:30:42.699824 systemd-logind[1518]: New session c1 of user core. May 27 18:30:42.891923 systemd[1668]: Queued start job for default target default.target. May 27 18:30:42.904189 systemd[1668]: Created slice app.slice - User Application Slice. May 27 18:30:42.904502 systemd[1668]: Reached target paths.target - Paths. May 27 18:30:42.904674 systemd[1668]: Reached target timers.target - Timers. May 27 18:30:42.907105 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 18:30:42.946571 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 18:30:42.947151 systemd[1668]: Reached target sockets.target - Sockets. May 27 18:30:42.947301 systemd[1668]: Reached target basic.target - Basic System. May 27 18:30:42.947512 systemd[1668]: Reached target default.target - Main User Target. May 27 18:30:42.947618 systemd[1668]: Startup finished in 233ms. May 27 18:30:42.947750 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 18:30:42.954300 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 18:30:43.029228 systemd[1]: Started sshd@1-146.190.128.44:22-139.178.68.195:51284.service - OpenSSH per-connection server daemon (139.178.68.195:51284). May 27 18:30:43.109105 sshd[1679]: Accepted publickey for core from 139.178.68.195 port 51284 ssh2: RSA SHA256:BDx/M33BokhBiX+qSaBTqBy2ZD0ak70ogCjs0cSoaGY May 27 18:30:43.111640 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:30:43.118227 systemd-logind[1518]: New session 2 of user core. May 27 18:30:43.125281 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 27 18:30:43.197086 sshd[1681]: Connection closed by 139.178.68.195 port 51284 May 27 18:30:43.199103 sshd-session[1679]: pam_unix(sshd:session): session closed for user core May 27 18:30:43.213260 systemd[1]: sshd@1-146.190.128.44:22-139.178.68.195:51284.service: Deactivated successfully. May 27 18:30:43.216600 systemd[1]: session-2.scope: Deactivated successfully. May 27 18:30:43.219357 systemd-logind[1518]: Session 2 logged out. Waiting for processes to exit. May 27 18:30:43.222709 systemd[1]: Started sshd@2-146.190.128.44:22-139.178.68.195:51296.service - OpenSSH per-connection server daemon (139.178.68.195:51296). May 27 18:30:43.228602 systemd-logind[1518]: Removed session 2. May 27 18:30:43.281189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:30:43.282166 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 18:30:43.282451 systemd[1]: Startup finished in 3.708s (kernel) + 6.407s (initrd) + 8.453s (userspace) = 18.569s. May 27 18:30:43.290567 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:30:43.296686 sshd[1687]: Accepted publickey for core from 139.178.68.195 port 51296 ssh2: RSA SHA256:BDx/M33BokhBiX+qSaBTqBy2ZD0ak70ogCjs0cSoaGY May 27 18:30:43.301868 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:30:43.332059 systemd-logind[1518]: New session 3 of user core. May 27 18:30:43.340600 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 18:30:43.407032 sshd[1699]: Connection closed by 139.178.68.195 port 51296 May 27 18:30:43.406980 sshd-session[1687]: pam_unix(sshd:session): session closed for user core May 27 18:30:43.414482 systemd[1]: sshd@2-146.190.128.44:22-139.178.68.195:51296.service: Deactivated successfully. May 27 18:30:43.418682 systemd[1]: session-3.scope: Deactivated successfully. May 27 18:30:43.420652 systemd-logind[1518]: Session 3 logged out. Waiting for processes to exit. May 27 18:30:43.422612 systemd-logind[1518]: Removed session 3. May 27 18:30:44.058561 kubelet[1694]: E0527 18:30:44.058499 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:30:44.062399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:30:44.062635 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:30:44.063465 systemd[1]: kubelet.service: Consumed 1.403s CPU time, 266.4M memory peak. May 27 18:30:53.423694 systemd[1]: Started sshd@3-146.190.128.44:22-139.178.68.195:41964.service - OpenSSH per-connection server daemon (139.178.68.195:41964). May 27 18:30:53.486679 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 41964 ssh2: RSA SHA256:BDx/M33BokhBiX+qSaBTqBy2ZD0ak70ogCjs0cSoaGY May 27 18:30:53.488506 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:30:53.495078 systemd-logind[1518]: New session 4 of user core. May 27 18:30:53.503303 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 27 18:30:53.565269 sshd[1713]: Connection closed by 139.178.68.195 port 41964 May 27 18:30:53.567766 sshd-session[1711]: pam_unix(sshd:session): session closed for user core May 27 18:30:53.577802 systemd[1]: sshd@3-146.190.128.44:22-139.178.68.195:41964.service: Deactivated successfully. May 27 18:30:53.580083 systemd[1]: session-4.scope: Deactivated successfully. May 27 18:30:53.582134 systemd-logind[1518]: Session 4 logged out. Waiting for processes to exit. May 27 18:30:53.585872 systemd[1]: Started sshd@4-146.190.128.44:22-139.178.68.195:39804.service - OpenSSH per-connection server daemon (139.178.68.195:39804). May 27 18:30:53.588167 systemd-logind[1518]: Removed session 4. May 27 18:30:53.653029 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 39804 ssh2: RSA SHA256:BDx/M33BokhBiX+qSaBTqBy2ZD0ak70ogCjs0cSoaGY May 27 18:30:53.654783 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:30:53.662512 systemd-logind[1518]: New session 5 of user core. May 27 18:30:53.668349 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 18:30:53.728586 sshd[1721]: Connection closed by 139.178.68.195 port 39804 May 27 18:30:53.729413 sshd-session[1719]: pam_unix(sshd:session): session closed for user core May 27 18:30:53.744117 systemd[1]: sshd@4-146.190.128.44:22-139.178.68.195:39804.service: Deactivated successfully. May 27 18:30:53.746875 systemd[1]: session-5.scope: Deactivated successfully. May 27 18:30:53.748401 systemd-logind[1518]: Session 5 logged out. Waiting for processes to exit. May 27 18:30:53.753553 systemd[1]: Started sshd@5-146.190.128.44:22-139.178.68.195:39816.service - OpenSSH per-connection server daemon (139.178.68.195:39816). May 27 18:30:53.755115 systemd-logind[1518]: Removed session 5. May 27 18:30:53.828223 sshd[1727]: Accepted publickey for core from 139.178.68.195 port 39816 ssh2: RSA SHA256:BDx/M33BokhBiX+qSaBTqBy2ZD0ak70ogCjs0cSoaGY May 27 18:30:53.830149 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:30:53.839226 systemd-logind[1518]: New session 6 of user core. May 27 18:30:53.854340 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 18:30:53.921828 sshd[1729]: Connection closed by 139.178.68.195 port 39816 May 27 18:30:53.923265 sshd-session[1727]: pam_unix(sshd:session): session closed for user core May 27 18:30:53.936288 systemd[1]: sshd@5-146.190.128.44:22-139.178.68.195:39816.service: Deactivated successfully. May 27 18:30:53.939940 systemd[1]: session-6.scope: Deactivated successfully. May 27 18:30:53.942209 systemd-logind[1518]: Session 6 logged out. Waiting for processes to exit. May 27 18:30:53.948305 systemd[1]: Started sshd@6-146.190.128.44:22-139.178.68.195:39828.service - OpenSSH per-connection server daemon (139.178.68.195:39828). May 27 18:30:53.950690 systemd-logind[1518]: Removed session 6. May 27 18:30:54.024953 sshd[1735]: Accepted publickey for core from 139.178.68.195 port 39828 ssh2: RSA SHA256:BDx/M33BokhBiX+qSaBTqBy2ZD0ak70ogCjs0cSoaGY May 27 18:30:54.026727 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:30:54.035840 systemd-logind[1518]: New session 7 of user core. May 27 18:30:54.049329 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 27 18:30:54.122746 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 18:30:54.123510 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:30:54.125516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 18:30:54.130815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:30:54.147302 sudo[1738]: pam_unix(sudo:session): session closed for user root May 27 18:30:54.152637 sshd[1737]: Connection closed by 139.178.68.195 port 39828 May 27 18:30:54.157318 sshd-session[1735]: pam_unix(sshd:session): session closed for user core May 27 18:30:54.168746 systemd[1]: sshd@6-146.190.128.44:22-139.178.68.195:39828.service: Deactivated successfully. May 27 18:30:54.173910 systemd[1]: session-7.scope: Deactivated successfully. May 27 18:30:54.176079 systemd-logind[1518]: Session 7 logged out. Waiting for processes to exit. May 27 18:30:54.184376 systemd[1]: Started sshd@7-146.190.128.44:22-139.178.68.195:39832.service - OpenSSH per-connection server daemon (139.178.68.195:39832). May 27 18:30:54.186726 systemd-logind[1518]: Removed session 7. May 27 18:30:54.260239 sshd[1747]: Accepted publickey for core from 139.178.68.195 port 39832 ssh2: RSA SHA256:BDx/M33BokhBiX+qSaBTqBy2ZD0ak70ogCjs0cSoaGY May 27 18:30:54.263614 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:30:54.271201 systemd-logind[1518]: New session 8 of user core. May 27 18:30:54.280331 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 18:30:54.338349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:30:54.348200 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 18:30:54.348716 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:30:54.349755 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:30:54.358272 sudo[1756]: pam_unix(sudo:session): session closed for user root May 27 18:30:54.368717 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 18:30:54.369225 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:30:54.388242 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 18:30:54.425309 kubelet[1755]: E0527 18:30:54.425195 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:30:54.431605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:30:54.431775 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:30:54.432626 systemd[1]: kubelet.service: Consumed 230ms CPU time, 110.7M memory peak. May 27 18:30:54.452738 augenrules[1786]: No rules May 27 18:30:54.454355 systemd[1]: audit-rules.service: Deactivated successfully. May 27 18:30:54.454626 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 27 18:30:54.456778 sudo[1754]: pam_unix(sudo:session): session closed for user root May 27 18:30:54.459754 sshd[1749]: Connection closed by 139.178.68.195 port 39832 May 27 18:30:54.460253 sshd-session[1747]: pam_unix(sshd:session): session closed for user core May 27 18:30:54.475667 systemd[1]: sshd@7-146.190.128.44:22-139.178.68.195:39832.service: Deactivated successfully. May 27 18:30:54.478341 systemd[1]: session-8.scope: Deactivated successfully. May 27 18:30:54.479650 systemd-logind[1518]: Session 8 logged out. Waiting for processes to exit. May 27 18:30:54.485399 systemd[1]: Started sshd@8-146.190.128.44:22-139.178.68.195:39842.service - OpenSSH per-connection server daemon (139.178.68.195:39842). May 27 18:30:54.486968 systemd-logind[1518]: Removed session 8. May 27 18:30:54.558040 sshd[1795]: Accepted publickey for core from 139.178.68.195 port 39842 ssh2: RSA SHA256:BDx/M33BokhBiX+qSaBTqBy2ZD0ak70ogCjs0cSoaGY May 27 18:30:54.559309 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:30:54.566292 systemd-logind[1518]: New session 9 of user core. May 27 18:30:54.574307 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 18:30:54.635502 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 18:30:54.635914 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:30:55.432327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:30:55.432508 systemd[1]: kubelet.service: Consumed 230ms CPU time, 110.7M memory peak. May 27 18:30:55.436478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:30:55.487655 systemd[1]: Reload requested from client PID 1832 ('systemctl') (unit session-9.scope)... May 27 18:30:55.487679 systemd[1]: Reloading... May 27 18:30:55.680047 zram_generator::config[1880]: No configuration found. May 27 18:30:55.811764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:30:55.995903 systemd[1]: Reloading finished in 507 ms. May 27 18:30:56.064064 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 18:30:56.064193 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 18:30:56.064594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:30:56.064666 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.3M memory peak. May 27 18:30:56.068257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:30:56.263067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:30:56.275056 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 18:30:56.324445 kubelet[1929]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:30:56.324798 kubelet[1929]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 27 18:30:56.324854 kubelet[1929]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:30:56.325052 kubelet[1929]: I0527 18:30:56.325003 1929 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 18:30:57.025020 kubelet[1929]: I0527 18:30:57.024268 1929 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 18:30:57.025020 kubelet[1929]: I0527 18:30:57.024315 1929 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 18:30:57.025020 kubelet[1929]: I0527 18:30:57.024697 1929 server.go:956] "Client rotation is on, will bootstrap in background" May 27 18:30:57.064016 kubelet[1929]: I0527 18:30:57.063950 1929 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 18:30:57.074481 kubelet[1929]: I0527 18:30:57.074444 1929 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 18:30:57.079257 kubelet[1929]: I0527 18:30:57.079217 1929 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 18:30:57.079727 kubelet[1929]: I0527 18:30:57.079671 1929 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 18:30:57.080009 kubelet[1929]: I0527 18:30:57.079811 1929 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"146.190.128.44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 18:30:57.080250 kubelet[1929]: I0527 18:30:57.080231 1929 topology_manager.go:138] "Creating topology manager with none policy" May 27 18:30:57.080330 kubelet[1929]: I0527 18:30:57.080317 1929 container_manager_linux.go:303] "Creating device plugin manager" May 27 18:30:57.081723 kubelet[1929]: I0527 18:30:57.081688 1929 state_mem.go:36] 
"Initialized new in-memory state store" May 27 18:30:57.086222 kubelet[1929]: I0527 18:30:57.085922 1929 kubelet.go:480] "Attempting to sync node with API server" May 27 18:30:57.086222 kubelet[1929]: I0527 18:30:57.086043 1929 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 18:30:57.086222 kubelet[1929]: I0527 18:30:57.086101 1929 kubelet.go:386] "Adding apiserver pod source" May 27 18:30:57.088378 kubelet[1929]: I0527 18:30:57.088329 1929 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 18:30:57.093974 kubelet[1929]: E0527 18:30:57.093921 1929 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:30:57.094225 kubelet[1929]: E0527 18:30:57.094208 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:30:57.096365 kubelet[1929]: I0527 18:30:57.096108 1929 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 18:30:57.096788 kubelet[1929]: I0527 18:30:57.096755 1929 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 18:30:57.097514 kubelet[1929]: W0527 18:30:57.097451 1929 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 18:30:57.101033 kubelet[1929]: I0527 18:30:57.100964 1929 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 18:30:57.101192 kubelet[1929]: I0527 18:30:57.101091 1929 server.go:1289] "Started kubelet" May 27 18:30:57.103010 kubelet[1929]: I0527 18:30:57.102859 1929 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 18:30:57.105014 kubelet[1929]: I0527 18:30:57.104248 1929 server.go:317] "Adding debug handlers to kubelet server" May 27 18:30:57.108018 kubelet[1929]: I0527 18:30:57.107891 1929 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 18:30:57.108523 kubelet[1929]: I0527 18:30:57.108485 1929 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 18:30:57.111516 kubelet[1929]: I0527 18:30:57.111488 1929 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 18:30:57.114719 kubelet[1929]: I0527 18:30:57.114643 1929 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 18:30:57.119440 kubelet[1929]: E0527 18:30:57.117866 1929 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.128.44.184375d970394307 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.128.44,UID:146.190.128.44,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:146.190.128.44,},FirstTimestamp:2025-05-27 18:30:57.101038343 +0000 UTC m=+0.820241407,LastTimestamp:2025-05-27 18:30:57.101038343 +0000 UTC m=+0.820241407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.128.44,}" May 27 18:30:57.119675 kubelet[1929]: E0527 18:30:57.119612 1929 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"146.190.128.44\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 18:30:57.119722 kubelet[1929]: E0527 18:30:57.119699 1929 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 18:30:57.121954 kubelet[1929]: I0527 18:30:57.121915 1929 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 18:30:57.122586 kubelet[1929]: E0527 18:30:57.122556 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:57.123842 kubelet[1929]: I0527 18:30:57.123813 1929 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 18:30:57.124184 kubelet[1929]: I0527 18:30:57.124168 1929 reconciler.go:26] "Reconciler: start to sync state" May 27 18:30:57.126909 kubelet[1929]: I0527 18:30:57.126879 1929 factory.go:223] Registration of the systemd container factory successfully May 27 18:30:57.127784 kubelet[1929]: I0527 18:30:57.127754 1929 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 18:30:57.131178 kubelet[1929]: E0527 18:30:57.131141 1929 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"146.190.128.44\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 27 18:30:57.131458 kubelet[1929]: E0527 18:30:57.131432 1929 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 18:30:57.133022 kubelet[1929]: I0527 18:30:57.131723 1929 factory.go:223] Registration of the containerd container factory successfully May 27 18:30:57.165974 kubelet[1929]: I0527 18:30:57.165941 1929 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 18:30:57.165974 kubelet[1929]: I0527 18:30:57.165970 1929 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 18:30:57.166160 kubelet[1929]: I0527 18:30:57.166006 1929 state_mem.go:36] "Initialized new in-memory state store" May 27 18:30:57.168115 kubelet[1929]: I0527 18:30:57.168076 1929 policy_none.go:49] "None policy: Start" May 27 18:30:57.168115 kubelet[1929]: I0527 18:30:57.168114 1929 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 18:30:57.168347 kubelet[1929]: I0527 18:30:57.168135 1929 state_mem.go:35] "Initializing new in-memory state store" May 27 18:30:57.179025 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 27 18:30:57.195512 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 18:30:57.201857 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 18:30:57.213353 kubelet[1929]: E0527 18:30:57.212176 1929 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 18:30:57.213353 kubelet[1929]: I0527 18:30:57.212507 1929 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 18:30:57.213353 kubelet[1929]: I0527 18:30:57.212528 1929 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 18:30:57.214805 kubelet[1929]: I0527 18:30:57.214069 1929 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 18:30:57.221234 kubelet[1929]: E0527 18:30:57.221117 1929 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 18:30:57.221419 kubelet[1929]: E0527 18:30:57.221246 1929 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"146.190.128.44\" not found" May 27 18:30:57.246777 kubelet[1929]: I0527 18:30:57.246678 1929 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 18:30:57.250285 kubelet[1929]: I0527 18:30:57.250230 1929 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 18:30:57.251055 kubelet[1929]: I0527 18:30:57.250397 1929 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 18:30:57.251055 kubelet[1929]: I0527 18:30:57.250431 1929 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 18:30:57.251055 kubelet[1929]: I0527 18:30:57.250440 1929 kubelet.go:2436] "Starting kubelet main sync loop" May 27 18:30:57.251055 kubelet[1929]: E0527 18:30:57.250580 1929 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 27 18:30:57.314572 kubelet[1929]: I0527 18:30:57.314432 1929 kubelet_node_status.go:75] "Attempting to register node" node="146.190.128.44" May 27 18:30:57.324313 kubelet[1929]: I0527 18:30:57.324265 1929 kubelet_node_status.go:78] "Successfully registered node" node="146.190.128.44" May 27 18:30:57.324760 kubelet[1929]: E0527 18:30:57.324542 1929 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"146.190.128.44\": node \"146.190.128.44\" not found" May 27 18:30:57.365948 kubelet[1929]: E0527 18:30:57.365901 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:57.466707 kubelet[1929]: E0527 18:30:57.466643 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:57.567972 kubelet[1929]: E0527 18:30:57.567828 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:57.668121 kubelet[1929]: E0527 18:30:57.668054 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:57.769034 kubelet[1929]: E0527 18:30:57.768946 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:57.799832 sudo[1798]: pam_unix(sudo:session): session closed for user root May 27 18:30:57.803250 sshd[1797]: Connection closed by 139.178.68.195 port 39842 May 27 18:30:57.803916 sshd-session[1795]: pam_unix(sshd:session): session closed for user core May 27 18:30:57.811413 systemd[1]: sshd@8-146.190.128.44:22-139.178.68.195:39842.service: Deactivated successfully. May 27 18:30:57.815266 systemd[1]: session-9.scope: Deactivated successfully. May 27 18:30:57.815678 systemd[1]: session-9.scope: Consumed 665ms CPU time, 76M memory peak. May 27 18:30:57.818423 systemd-logind[1518]: Session 9 logged out. Waiting for processes to exit. May 27 18:30:57.822353 systemd-logind[1518]: Removed session 9. 
May 27 18:30:57.869751 kubelet[1929]: E0527 18:30:57.869673 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:57.970901 kubelet[1929]: E0527 18:30:57.970820 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:58.027433 kubelet[1929]: I0527 18:30:58.027327 1929 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 27 18:30:58.027678 kubelet[1929]: I0527 18:30:58.027617 1929 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" May 27 18:30:58.071194 kubelet[1929]: E0527 18:30:58.071036 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:58.095331 kubelet[1929]: E0527 18:30:58.095267 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:30:58.171621 kubelet[1929]: E0527 18:30:58.171540 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:58.271756 kubelet[1929]: E0527 18:30:58.271684 1929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"146.190.128.44\" not found" May 27 18:30:58.374206 kubelet[1929]: I0527 18:30:58.373955 1929 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 27 18:30:58.375055 kubelet[1929]: I0527 18:30:58.375036 1929 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 27 18:30:58.375138 containerd[1535]: time="2025-05-27T18:30:58.374713520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 18:30:59.095879 kubelet[1929]: I0527 18:30:59.095532 1929 apiserver.go:52] "Watching apiserver" May 27 18:30:59.096195 kubelet[1929]: E0527 18:30:59.096092 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:30:59.104021 kubelet[1929]: E0527 18:30:59.103938 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njssr" podUID="a276bf69-1dea-406f-9796-048db395c71a" May 27 18:30:59.119495 systemd[1]: Created slice kubepods-besteffort-podc3e007d8_f89f_4fa9_a254_fdb71882c88d.slice - libcontainer container kubepods-besteffort-podc3e007d8_f89f_4fa9_a254_fdb71882c88d.slice. 
May 27 18:30:59.124429 kubelet[1929]: I0527 18:30:59.124356 1929 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 18:30:59.140010 kubelet[1929]: I0527 18:30:59.139938 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkrv\" (UniqueName: \"kubernetes.io/projected/a276bf69-1dea-406f-9796-048db395c71a-kube-api-access-tjkrv\") pod \"csi-node-driver-njssr\" (UID: \"a276bf69-1dea-406f-9796-048db395c71a\") " pod="calico-system/csi-node-driver-njssr" May 27 18:30:59.142016 kubelet[1929]: I0527 18:30:59.140281 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7bhq\" (UniqueName: \"kubernetes.io/projected/c3e007d8-f89f-4fa9-a254-fdb71882c88d-kube-api-access-r7bhq\") pod \"kube-proxy-j5whj\" (UID: \"c3e007d8-f89f-4fa9-a254-fdb71882c88d\") " pod="kube-system/kube-proxy-j5whj" May 27 18:30:59.142016 kubelet[1929]: I0527 18:30:59.140339 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1d17214-e399-4a18-b2b1-5800394606c5-var-lib-calico\") pod \"tigera-operator-844669ff44-4spkm\" (UID: \"d1d17214-e399-4a18-b2b1-5800394606c5\") " pod="tigera-operator/tigera-operator-844669ff44-4spkm" May 27 18:30:59.142016 kubelet[1929]: I0527 18:30:59.140372 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9hhz\" (UniqueName: \"kubernetes.io/projected/d1d17214-e399-4a18-b2b1-5800394606c5-kube-api-access-s9hhz\") pod \"tigera-operator-844669ff44-4spkm\" (UID: \"d1d17214-e399-4a18-b2b1-5800394606c5\") " pod="tigera-operator/tigera-operator-844669ff44-4spkm" May 27 18:30:59.142016 kubelet[1929]: I0527 18:30:59.140423 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0354d057-669e-447e-bc87-8ef47564b3d5-cni-log-dir\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142016 kubelet[1929]: I0527 18:30:59.140480 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr89f\" (UniqueName: \"kubernetes.io/projected/0354d057-669e-447e-bc87-8ef47564b3d5-kube-api-access-wr89f\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142358 kubelet[1929]: I0527 18:30:59.140523 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0354d057-669e-447e-bc87-8ef47564b3d5-cni-bin-dir\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142358 kubelet[1929]: I0527 18:30:59.140555 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0354d057-669e-447e-bc87-8ef47564b3d5-cni-net-dir\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142358 kubelet[1929]: I0527 18:30:59.140584 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/0354d057-669e-447e-bc87-8ef47564b3d5-lib-modules\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142358 kubelet[1929]: I0527 18:30:59.140631 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0354d057-669e-447e-bc87-8ef47564b3d5-node-certs\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142358 kubelet[1929]: I0527 18:30:59.140691 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0354d057-669e-447e-bc87-8ef47564b3d5-var-lib-calico\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142549 kubelet[1929]: I0527 18:30:59.140716 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a276bf69-1dea-406f-9796-048db395c71a-varrun\") pod \"csi-node-driver-njssr\" (UID: \"a276bf69-1dea-406f-9796-048db395c71a\") " pod="calico-system/csi-node-driver-njssr" May 27 18:30:59.142549 kubelet[1929]: I0527 18:30:59.140758 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0354d057-669e-447e-bc87-8ef47564b3d5-flexvol-driver-host\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142549 kubelet[1929]: I0527 18:30:59.140786 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0354d057-669e-447e-bc87-8ef47564b3d5-policysync\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142549 kubelet[1929]: I0527 18:30:59.140814 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0354d057-669e-447e-bc87-8ef47564b3d5-tigera-ca-bundle\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142549 kubelet[1929]: I0527 18:30:59.140842 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0354d057-669e-447e-bc87-8ef47564b3d5-var-run-calico\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142727 kubelet[1929]: I0527 18:30:59.140876 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0354d057-669e-447e-bc87-8ef47564b3d5-xtables-lock\") pod \"calico-node-8rkk7\" (UID: \"0354d057-669e-447e-bc87-8ef47564b3d5\") " pod="calico-system/calico-node-8rkk7" May 27 18:30:59.142727 kubelet[1929]: I0527 18:30:59.140931 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a276bf69-1dea-406f-9796-048db395c71a-registration-dir\") pod 
\"csi-node-driver-njssr\" (UID: \"a276bf69-1dea-406f-9796-048db395c71a\") " pod="calico-system/csi-node-driver-njssr" May 27 18:30:59.142727 kubelet[1929]: I0527 18:30:59.140972 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3e007d8-f89f-4fa9-a254-fdb71882c88d-kube-proxy\") pod \"kube-proxy-j5whj\" (UID: \"c3e007d8-f89f-4fa9-a254-fdb71882c88d\") " pod="kube-system/kube-proxy-j5whj" May 27 18:30:59.142727 kubelet[1929]: I0527 18:30:59.141012 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3e007d8-f89f-4fa9-a254-fdb71882c88d-xtables-lock\") pod \"kube-proxy-j5whj\" (UID: \"c3e007d8-f89f-4fa9-a254-fdb71882c88d\") " pod="kube-system/kube-proxy-j5whj" May 27 18:30:59.142727 kubelet[1929]: I0527 18:30:59.141057 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e007d8-f89f-4fa9-a254-fdb71882c88d-lib-modules\") pod \"kube-proxy-j5whj\" (UID: \"c3e007d8-f89f-4fa9-a254-fdb71882c88d\") " pod="kube-system/kube-proxy-j5whj" May 27 18:30:59.142915 kubelet[1929]: I0527 18:30:59.141087 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a276bf69-1dea-406f-9796-048db395c71a-kubelet-dir\") pod \"csi-node-driver-njssr\" (UID: \"a276bf69-1dea-406f-9796-048db395c71a\") " pod="calico-system/csi-node-driver-njssr" May 27 18:30:59.142915 kubelet[1929]: I0527 18:30:59.141110 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a276bf69-1dea-406f-9796-048db395c71a-socket-dir\") pod \"csi-node-driver-njssr\" (UID: \"a276bf69-1dea-406f-9796-048db395c71a\") " pod="calico-system/csi-node-driver-njssr" May 27 18:30:59.148115 systemd[1]: Created slice kubepods-besteffort-pod0354d057_669e_447e_bc87_8ef47564b3d5.slice - libcontainer container kubepods-besteffort-pod0354d057_669e_447e_bc87_8ef47564b3d5.slice. May 27 18:30:59.162345 systemd[1]: Created slice kubepods-besteffort-podd1d17214_e399_4a18_b2b1_5800394606c5.slice - libcontainer container kubepods-besteffort-podd1d17214_e399_4a18_b2b1_5800394606c5.slice. May 27 18:30:59.246527 kubelet[1929]: E0527 18:30:59.246484 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:30:59.246527 kubelet[1929]: W0527 18:30:59.246513 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:30:59.246731 kubelet[1929]: E0527 18:30:59.246551 1929 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:30:59.253332 kubelet[1929]: E0527 18:30:59.253295 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:30:59.253525 kubelet[1929]: W0527 18:30:59.253505 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:30:59.253645 kubelet[1929]: E0527 18:30:59.253628 1929 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:30:59.286770 kubelet[1929]: E0527 18:30:59.286736 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:30:59.287045 kubelet[1929]: W0527 18:30:59.286980 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:30:59.287045 kubelet[1929]: E0527 18:30:59.287019 1929 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:30:59.297368 kubelet[1929]: E0527 18:30:59.297305 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:30:59.297658 kubelet[1929]: W0527 18:30:59.297340 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:30:59.297658 kubelet[1929]: E0527 18:30:59.297624 1929 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:30:59.304918 kubelet[1929]: E0527 18:30:59.304810 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:30:59.304918 kubelet[1929]: W0527 18:30:59.304839 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:30:59.304918 kubelet[1929]: E0527 18:30:59.304865 1929 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:30:59.314300 kubelet[1929]: E0527 18:30:59.314222 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:30:59.314592 kubelet[1929]: W0527 18:30:59.314489 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:30:59.314592 kubelet[1929]: E0527 18:30:59.314528 1929 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:30:59.432159 kubelet[1929]: E0527 18:30:59.431908 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:30:59.434854 containerd[1535]: time="2025-05-27T18:30:59.434787877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j5whj,Uid:c3e007d8-f89f-4fa9-a254-fdb71882c88d,Namespace:kube-system,Attempt:0,}" May 27 18:30:59.453123 containerd[1535]: time="2025-05-27T18:30:59.453069113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rkk7,Uid:0354d057-669e-447e-bc87-8ef47564b3d5,Namespace:calico-system,Attempt:0,}" May 27 18:30:59.467553 containerd[1535]: time="2025-05-27T18:30:59.467211982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-4spkm,Uid:d1d17214-e399-4a18-b2b1-5800394606c5,Namespace:tigera-operator,Attempt:0,}" May 27 18:30:59.980578 containerd[1535]: time="2025-05-27T18:30:59.980089845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 18:30:59.981044 containerd[1535]: time="2025-05-27T18:30:59.980965707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:30:59.981696 containerd[1535]: time="2025-05-27T18:30:59.981589801Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:30:59.982629 containerd[1535]: time="2025-05-27T18:30:59.982586795Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:30:59.983648 containerd[1535]: time="2025-05-27T18:30:59.983600943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 27 18:30:59.986011 containerd[1535]: time="2025-05-27T18:30:59.984636846Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:30:59.986011 containerd[1535]: time="2025-05-27T18:30:59.985359204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:30:59.986011 containerd[1535]: time="2025-05-27T18:30:59.985720466Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 27 18:30:59.988787 containerd[1535]: time="2025-05-27T18:30:59.988718320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 548.551741ms" May 27 18:30:59.989767 containerd[1535]: time="2025-05-27T18:30:59.989724254Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 517.154305ms" May 27 18:30:59.993697 containerd[1535]: time="2025-05-27T18:30:59.993634641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 536.260575ms" May 27 18:31:00.026140 containerd[1535]: time="2025-05-27T18:31:00.026068518Z" level=info msg="connecting to shim 6c324e209b917b158d815b88d1c6953ba08f6de403f912e3527e8d7282cb8bb6" address="unix:///run/containerd/s/cdd472f6d3dc1bf173a19d8f8c91958e617ea717127745255379fb2ac3f70dc9" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:00.036014 containerd[1535]: time="2025-05-27T18:31:00.035940855Z" level=info msg="connecting to shim 7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893" address="unix:///run/containerd/s/27fa9f106177f276b59b2269617bfd0796e014d2d3846dbcacafae222a252f34" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:00.043325 containerd[1535]: time="2025-05-27T18:31:00.043257982Z" level=info msg="connecting to shim 29921660a2b4e777662cf5f5278ce8d68188f6c863ce94d5ca2fb1dadd52648b" address="unix:///run/containerd/s/25c340e23284caf67094654080ac02dd4b3657a3771b2e4f332f3427e8dfcecb" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:00.096247 kubelet[1929]: E0527 18:31:00.096200 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:00.096580 systemd[1]: Started cri-containerd-7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893.scope - libcontainer container 7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893. May 27 18:31:00.117320 systemd[1]: Started cri-containerd-6c324e209b917b158d815b88d1c6953ba08f6de403f912e3527e8d7282cb8bb6.scope - libcontainer container 6c324e209b917b158d815b88d1c6953ba08f6de403f912e3527e8d7282cb8bb6. May 27 18:31:00.129106 systemd[1]: Started cri-containerd-29921660a2b4e777662cf5f5278ce8d68188f6c863ce94d5ca2fb1dadd52648b.scope - libcontainer container 29921660a2b4e777662cf5f5278ce8d68188f6c863ce94d5ca2fb1dadd52648b. 
May 27 18:31:00.215456 containerd[1535]: time="2025-05-27T18:31:00.215296553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rkk7,Uid:0354d057-669e-447e-bc87-8ef47564b3d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893\"" May 27 18:31:00.224276 containerd[1535]: time="2025-05-27T18:31:00.224231159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 18:31:00.234607 containerd[1535]: time="2025-05-27T18:31:00.233853686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j5whj,Uid:c3e007d8-f89f-4fa9-a254-fdb71882c88d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c324e209b917b158d815b88d1c6953ba08f6de403f912e3527e8d7282cb8bb6\"" May 27 18:31:00.236946 kubelet[1929]: E0527 18:31:00.236672 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:31:00.251848 kubelet[1929]: E0527 18:31:00.251549 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njssr" podUID="a276bf69-1dea-406f-9796-048db395c71a" May 27 18:31:00.304952 containerd[1535]: time="2025-05-27T18:31:00.304885478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-4spkm,Uid:d1d17214-e399-4a18-b2b1-5800394606c5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"29921660a2b4e777662cf5f5278ce8d68188f6c863ce94d5ca2fb1dadd52648b\"" May 27 18:31:01.097212 kubelet[1929]: E0527 18:31:01.097098 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:01.585370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061541460.mount: Deactivated successfully. 
May 27 18:31:01.718394 containerd[1535]: time="2025-05-27T18:31:01.717479585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:01.720825 containerd[1535]: time="2025-05-27T18:31:01.720737360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460" May 27 18:31:01.722696 containerd[1535]: time="2025-05-27T18:31:01.722585807Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:01.727436 containerd[1535]: time="2025-05-27T18:31:01.727359919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:01.728682 containerd[1535]: time="2025-05-27T18:31:01.728615100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.504104041s" May 27 18:31:01.728909 containerd[1535]: time="2025-05-27T18:31:01.728874910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 18:31:01.731077 containerd[1535]: time="2025-05-27T18:31:01.731026763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 18:31:01.734575 containerd[1535]: time="2025-05-27T18:31:01.734524564Z" level=info msg="CreateContainer within sandbox \"7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 18:31:01.751015 containerd[1535]: time="2025-05-27T18:31:01.750329774Z" level=info msg="Container 6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:01.758179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904129933.mount: Deactivated successfully. May 27 18:31:01.770611 containerd[1535]: time="2025-05-27T18:31:01.770524920Z" level=info msg="CreateContainer within sandbox \"7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296\"" May 27 18:31:01.774533 containerd[1535]: time="2025-05-27T18:31:01.772089761Z" level=info msg="StartContainer for \"6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296\"" May 27 18:31:01.777998 containerd[1535]: time="2025-05-27T18:31:01.777916344Z" level=info msg="connecting to shim 6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296" address="unix:///run/containerd/s/27fa9f106177f276b59b2269617bfd0796e014d2d3846dbcacafae222a252f34" protocol=ttrpc version=3 May 27 18:31:01.828843 systemd[1]: Started cri-containerd-6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296.scope - libcontainer container 6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296. 
May 27 18:31:01.910820 containerd[1535]: time="2025-05-27T18:31:01.909132704Z" level=info msg="StartContainer for \"6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296\" returns successfully" May 27 18:31:01.926480 systemd[1]: cri-containerd-6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296.scope: Deactivated successfully. May 27 18:31:01.934970 containerd[1535]: time="2025-05-27T18:31:01.934722373Z" level=info msg="received exit event container_id:\"6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296\" id:\"6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296\" pid:2148 exited_at:{seconds:1748370661 nanos:934061261}" May 27 18:31:01.938531 containerd[1535]: time="2025-05-27T18:31:01.935297914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296\" id:\"6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296\" pid:2148 exited_at:{seconds:1748370661 nanos:934061261}" May 27 18:31:02.097654 kubelet[1929]: E0527 18:31:02.097418 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:02.252485 kubelet[1929]: E0527 18:31:02.251592 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njssr" podUID="a276bf69-1dea-406f-9796-048db395c71a" May 27 18:31:02.527258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e6f9e32d0fd5aedd1de5fa147e77a7b34efe2d9f63953dadea93dd59875b296-rootfs.mount: Deactivated successfully. May 27 18:31:03.101771 kubelet[1929]: E0527 18:31:03.097943 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:03.138137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354108267.mount: Deactivated successfully. 
May 27 18:31:03.788157 containerd[1535]: time="2025-05-27T18:31:03.788103903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:03.791171 containerd[1535]: time="2025-05-27T18:31:03.791087374Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 27 18:31:03.792273 containerd[1535]: time="2025-05-27T18:31:03.792120098Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:03.794933 containerd[1535]: time="2025-05-27T18:31:03.794839998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:03.795865 containerd[1535]: time="2025-05-27T18:31:03.795226012Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 2.064154333s" May 27 18:31:03.795865 containerd[1535]: time="2025-05-27T18:31:03.795265640Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 18:31:03.797295 containerd[1535]: time="2025-05-27T18:31:03.797256692Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 18:31:03.801108 containerd[1535]: time="2025-05-27T18:31:03.801061958Z" level=info msg="CreateContainer within sandbox \"6c324e209b917b158d815b88d1c6953ba08f6de403f912e3527e8d7282cb8bb6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 18:31:03.818123 containerd[1535]: time="2025-05-27T18:31:03.818056035Z" level=info msg="Container 92aa250cd28e7034027712d9fb81763a61ec3eb576f6c9910d499a4746e88f48: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:03.825854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611670726.mount: Deactivated successfully. May 27 18:31:03.841438 containerd[1535]: time="2025-05-27T18:31:03.841352299Z" level=info msg="CreateContainer within sandbox \"6c324e209b917b158d815b88d1c6953ba08f6de403f912e3527e8d7282cb8bb6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92aa250cd28e7034027712d9fb81763a61ec3eb576f6c9910d499a4746e88f48\"" May 27 18:31:03.843119 containerd[1535]: time="2025-05-27T18:31:03.842237276Z" level=info msg="StartContainer for \"92aa250cd28e7034027712d9fb81763a61ec3eb576f6c9910d499a4746e88f48\"" May 27 18:31:03.844144 containerd[1535]: time="2025-05-27T18:31:03.844106892Z" level=info msg="connecting to shim 92aa250cd28e7034027712d9fb81763a61ec3eb576f6c9910d499a4746e88f48" address="unix:///run/containerd/s/cdd472f6d3dc1bf173a19d8f8c91958e617ea717127745255379fb2ac3f70dc9" protocol=ttrpc version=3 May 27 18:31:03.888391 systemd[1]: Started cri-containerd-92aa250cd28e7034027712d9fb81763a61ec3eb576f6c9910d499a4746e88f48.scope - libcontainer container 92aa250cd28e7034027712d9fb81763a61ec3eb576f6c9910d499a4746e88f48. 
May 27 18:31:03.958815 containerd[1535]: time="2025-05-27T18:31:03.958569389Z" level=info msg="StartContainer for \"92aa250cd28e7034027712d9fb81763a61ec3eb576f6c9910d499a4746e88f48\" returns successfully" May 27 18:31:04.103689 kubelet[1929]: E0527 18:31:04.103528 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:04.251812 kubelet[1929]: E0527 18:31:04.251738 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njssr" podUID="a276bf69-1dea-406f-9796-048db395c71a" May 27 18:31:04.329257 kubelet[1929]: E0527 18:31:04.329221 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:31:04.344285 kubelet[1929]: I0527 18:31:04.344196 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j5whj" podStartSLOduration=3.784738388 podStartE2EDuration="7.34417655s" podCreationTimestamp="2025-05-27 18:30:57 +0000 UTC" firstStartedPulling="2025-05-27 18:31:00.237655227 +0000 UTC m=+3.956858220" lastFinishedPulling="2025-05-27 18:31:03.797093378 +0000 UTC m=+7.516296382" observedRunningTime="2025-05-27 18:31:04.343726515 +0000 UTC m=+8.062929537" watchObservedRunningTime="2025-05-27 18:31:04.34417655 +0000 UTC m=+8.063379561" May 27 18:31:05.103792 kubelet[1929]: E0527 18:31:05.103718 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:05.317367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1503651561.mount: Deactivated successfully. May 27 18:31:05.330817 kubelet[1929]: E0527 18:31:05.330780 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:31:05.612413 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
May 27 18:31:06.104943 kubelet[1929]: E0527 18:31:06.104896 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:06.144683 containerd[1535]: time="2025-05-27T18:31:06.143839056Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:06.145474 containerd[1535]: time="2025-05-27T18:31:06.145430019Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 18:31:06.146512 containerd[1535]: time="2025-05-27T18:31:06.146470271Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:06.148736 containerd[1535]: time="2025-05-27T18:31:06.148695829Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:06.149826 containerd[1535]: time="2025-05-27T18:31:06.149780655Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.352473964s" May 27 18:31:06.150042 containerd[1535]: time="2025-05-27T18:31:06.150018420Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 18:31:06.152005 containerd[1535]: time="2025-05-27T18:31:06.151293300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 18:31:06.155079 containerd[1535]: time="2025-05-27T18:31:06.155036164Z" level=info msg="CreateContainer within sandbox \"29921660a2b4e777662cf5f5278ce8d68188f6c863ce94d5ca2fb1dadd52648b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 18:31:06.163706 containerd[1535]: time="2025-05-27T18:31:06.163662344Z" level=info msg="Container 96caa4029dd55c117f70bc9b2f7c4816a06ee53f8b2036cd4b1764b71e7cbba1: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:06.166367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3337757846.mount: Deactivated successfully. 
May 27 18:31:06.175722 containerd[1535]: time="2025-05-27T18:31:06.175315372Z" level=info msg="CreateContainer within sandbox \"29921660a2b4e777662cf5f5278ce8d68188f6c863ce94d5ca2fb1dadd52648b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"96caa4029dd55c117f70bc9b2f7c4816a06ee53f8b2036cd4b1764b71e7cbba1\"" May 27 18:31:06.177031 containerd[1535]: time="2025-05-27T18:31:06.176272225Z" level=info msg="StartContainer for \"96caa4029dd55c117f70bc9b2f7c4816a06ee53f8b2036cd4b1764b71e7cbba1\"" May 27 18:31:06.177644 containerd[1535]: time="2025-05-27T18:31:06.177500003Z" level=info msg="connecting to shim 96caa4029dd55c117f70bc9b2f7c4816a06ee53f8b2036cd4b1764b71e7cbba1" address="unix:///run/containerd/s/25c340e23284caf67094654080ac02dd4b3657a3771b2e4f332f3427e8dfcecb" protocol=ttrpc version=3 May 27 18:31:06.220329 systemd[1]: Started cri-containerd-96caa4029dd55c117f70bc9b2f7c4816a06ee53f8b2036cd4b1764b71e7cbba1.scope - libcontainer container 96caa4029dd55c117f70bc9b2f7c4816a06ee53f8b2036cd4b1764b71e7cbba1. May 27 18:31:06.253009 kubelet[1929]: E0527 18:31:06.252116 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njssr" podUID="a276bf69-1dea-406f-9796-048db395c71a" May 27 18:31:06.274429 containerd[1535]: time="2025-05-27T18:31:06.274380904Z" level=info msg="StartContainer for \"96caa4029dd55c117f70bc9b2f7c4816a06ee53f8b2036cd4b1764b71e7cbba1\" returns successfully" May 27 18:31:06.350013 kubelet[1929]: I0527 18:31:06.349854 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-4spkm" podStartSLOduration=3.5068693570000002 podStartE2EDuration="9.349831252s" podCreationTimestamp="2025-05-27 18:30:57 +0000 UTC" firstStartedPulling="2025-05-27 18:31:00.308064249 +0000 UTC m=+4.027267254" lastFinishedPulling="2025-05-27 18:31:06.15102614 +0000 UTC m=+9.870229149" observedRunningTime="2025-05-27 18:31:06.349418104 +0000 UTC m=+10.068621184" watchObservedRunningTime="2025-05-27 18:31:06.349831252 +0000 UTC m=+10.069034264" May 27 18:31:07.106598 kubelet[1929]: E0527 18:31:07.106504 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:08.107660 kubelet[1929]: E0527 18:31:08.107602 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:08.250921 kubelet[1929]: E0527 18:31:08.250853 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njssr" podUID="a276bf69-1dea-406f-9796-048db395c71a" May 27 18:31:08.684366 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
May 27 18:31:09.108099 kubelet[1929]: E0527 18:31:09.108044 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:10.109912 kubelet[1929]: E0527 18:31:10.109740 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:10.237034 containerd[1535]: time="2025-05-27T18:31:10.236937685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:10.238440 containerd[1535]: time="2025-05-27T18:31:10.238241681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 18:31:10.239214 containerd[1535]: time="2025-05-27T18:31:10.239167802Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:10.241799 containerd[1535]: time="2025-05-27T18:31:10.241712459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:10.242979 containerd[1535]: time="2025-05-27T18:31:10.242677502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.091345773s" May 27 18:31:10.242979 containerd[1535]: time="2025-05-27T18:31:10.242723901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 18:31:10.248028 containerd[1535]: time="2025-05-27T18:31:10.247930937Z" level=info msg="CreateContainer within sandbox \"7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 18:31:10.251431 kubelet[1929]: E0527 18:31:10.251002 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njssr" podUID="a276bf69-1dea-406f-9796-048db395c71a" May 27 18:31:10.257235 containerd[1535]: time="2025-05-27T18:31:10.257167511Z" level=info msg="Container d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:10.277652 containerd[1535]: time="2025-05-27T18:31:10.277401104Z" level=info msg="CreateContainer within sandbox \"7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7\"" May 27 18:31:10.279101 containerd[1535]: time="2025-05-27T18:31:10.279017166Z" level=info msg="StartContainer for \"d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7\"" May 27 18:31:10.284024 containerd[1535]: time="2025-05-27T18:31:10.283631582Z" level=info msg="connecting to shim d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7" 
address="unix:///run/containerd/s/27fa9f106177f276b59b2269617bfd0796e014d2d3846dbcacafae222a252f34" protocol=ttrpc version=3 May 27 18:31:10.334406 systemd[1]: Started cri-containerd-d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7.scope - libcontainer container d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7. May 27 18:31:10.412353 containerd[1535]: time="2025-05-27T18:31:10.411332882Z" level=info msg="StartContainer for \"d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7\" returns successfully" May 27 18:31:11.111731 kubelet[1929]: E0527 18:31:11.111676 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:11.202223 containerd[1535]: time="2025-05-27T18:31:11.202149984Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 18:31:11.206350 systemd[1]: cri-containerd-d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7.scope: Deactivated successfully. May 27 18:31:11.206868 systemd[1]: cri-containerd-d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7.scope: Consumed 900ms CPU time, 191.6M memory peak, 170.9M written to disk. May 27 18:31:11.211586 containerd[1535]: time="2025-05-27T18:31:11.211523778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7\" id:\"d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7\" pid:2420 exited_at:{seconds:1748370671 nanos:210549432}" May 27 18:31:11.211765 containerd[1535]: time="2025-05-27T18:31:11.211586290Z" level=info msg="received exit event container_id:\"d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7\" id:\"d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7\" pid:2420 exited_at:{seconds:1748370671 nanos:210549432}" May 27 18:31:11.247356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d74552947880bc97203e5ef8c84f6f3a42c771163dcd84a79ab24a1c7ebd02e7-rootfs.mount: Deactivated successfully. May 27 18:31:11.268027 kubelet[1929]: I0527 18:31:11.267822 1929 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 18:31:11.337019 systemd[1]: Created slice kubepods-besteffort-pod87f3a823_c410_4d8f_a90d_b0b1b3a0b283.slice - libcontainer container kubepods-besteffort-pod87f3a823_c410_4d8f_a90d_b0b1b3a0b283.slice. May 27 18:31:11.345324 systemd[1]: Created slice kubepods-besteffort-pod505f6c41_f4b2_4935_af06_1d5c5642e9d7.slice - libcontainer container kubepods-besteffort-pod505f6c41_f4b2_4935_af06_1d5c5642e9d7.slice. May 27 18:31:11.352931 systemd[1]: Created slice kubepods-besteffort-pod923329c2_959f_4193_b2be_f3dbcc05c0db.slice - libcontainer container kubepods-besteffort-pod923329c2_959f_4193_b2be_f3dbcc05c0db.slice. May 27 18:31:11.365829 systemd[1]: Created slice kubepods-besteffort-pod8ed10d2f_6ab3_40fd_8233_3e62f36b2ab4.slice - libcontainer container kubepods-besteffort-pod8ed10d2f_6ab3_40fd_8233_3e62f36b2ab4.slice. 
May 27 18:31:11.377407 containerd[1535]: time="2025-05-27T18:31:11.377356580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 18:31:11.391406 kubelet[1929]: I0527 18:31:11.391338 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923329c2-959f-4193-b2be-f3dbcc05c0db-config\") pod \"goldmane-78d55f7ddc-l2hf9\" (UID: \"923329c2-959f-4193-b2be-f3dbcc05c0db\") " pod="calico-system/goldmane-78d55f7ddc-l2hf9" May 27 18:31:11.391406 kubelet[1929]: I0527 18:31:11.391401 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/923329c2-959f-4193-b2be-f3dbcc05c0db-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-l2hf9\" (UID: \"923329c2-959f-4193-b2be-f3dbcc05c0db\") " pod="calico-system/goldmane-78d55f7ddc-l2hf9" May 27 18:31:11.391406 kubelet[1929]: I0527 18:31:11.391423 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlwv\" (UniqueName: \"kubernetes.io/projected/923329c2-959f-4193-b2be-f3dbcc05c0db-kube-api-access-2wlwv\") pod \"goldmane-78d55f7ddc-l2hf9\" (UID: \"923329c2-959f-4193-b2be-f3dbcc05c0db\") " pod="calico-system/goldmane-78d55f7ddc-l2hf9" May 27 18:31:11.391676 kubelet[1929]: I0527 18:31:11.391442 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-whisker-backend-key-pair\") pod \"whisker-55bcb9dc75-47rc8\" (UID: \"87f3a823-c410-4d8f-a90d-b0b1b3a0b283\") " pod="calico-system/whisker-55bcb9dc75-47rc8" May 27 18:31:11.391676 kubelet[1929]: I0527 18:31:11.391458 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnjc\" (UniqueName: \"kubernetes.io/projected/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-kube-api-access-spnjc\") pod \"whisker-55bcb9dc75-47rc8\" (UID: \"87f3a823-c410-4d8f-a90d-b0b1b3a0b283\") " pod="calico-system/whisker-55bcb9dc75-47rc8" May 27 18:31:11.391676 kubelet[1929]: I0527 18:31:11.391474 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/505f6c41-f4b2-4935-af06-1d5c5642e9d7-calico-apiserver-certs\") pod \"calico-apiserver-5c896bff9c-f2xl7\" (UID: \"505f6c41-f4b2-4935-af06-1d5c5642e9d7\") " pod="calico-apiserver/calico-apiserver-5c896bff9c-f2xl7" May 27 18:31:11.391676 kubelet[1929]: I0527 18:31:11.391496 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4-calico-apiserver-certs\") pod \"calico-apiserver-5c896bff9c-ffh9v\" (UID: \"8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4\") " pod="calico-apiserver/calico-apiserver-5c896bff9c-ffh9v" May 27 18:31:11.391676 kubelet[1929]: I0527 18:31:11.391512 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq29f\" (UniqueName: \"kubernetes.io/projected/8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4-kube-api-access-cq29f\") pod \"calico-apiserver-5c896bff9c-ffh9v\" (UID: \"8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4\") " pod="calico-apiserver/calico-apiserver-5c896bff9c-ffh9v" May 27 18:31:11.391821 kubelet[1929]: I0527 18:31:11.391530 
1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/923329c2-959f-4193-b2be-f3dbcc05c0db-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-l2hf9\" (UID: \"923329c2-959f-4193-b2be-f3dbcc05c0db\") " pod="calico-system/goldmane-78d55f7ddc-l2hf9" May 27 18:31:11.391821 kubelet[1929]: I0527 18:31:11.391555 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-whisker-ca-bundle\") pod \"whisker-55bcb9dc75-47rc8\" (UID: \"87f3a823-c410-4d8f-a90d-b0b1b3a0b283\") " pod="calico-system/whisker-55bcb9dc75-47rc8" May 27 18:31:11.391821 kubelet[1929]: I0527 18:31:11.391585 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt96f\" (UniqueName: \"kubernetes.io/projected/505f6c41-f4b2-4935-af06-1d5c5642e9d7-kube-api-access-kt96f\") pod \"calico-apiserver-5c896bff9c-f2xl7\" (UID: \"505f6c41-f4b2-4935-af06-1d5c5642e9d7\") " pod="calico-apiserver/calico-apiserver-5c896bff9c-f2xl7" May 27 18:31:11.644401 containerd[1535]: time="2025-05-27T18:31:11.643929795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bcb9dc75-47rc8,Uid:87f3a823-c410-4d8f-a90d-b0b1b3a0b283,Namespace:calico-system,Attempt:0,}" May 27 18:31:11.655119 containerd[1535]: time="2025-05-27T18:31:11.654939197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c896bff9c-f2xl7,Uid:505f6c41-f4b2-4935-af06-1d5c5642e9d7,Namespace:calico-apiserver,Attempt:0,}" May 27 18:31:11.663661 containerd[1535]: time="2025-05-27T18:31:11.663597172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-l2hf9,Uid:923329c2-959f-4193-b2be-f3dbcc05c0db,Namespace:calico-system,Attempt:0,}" May 27 18:31:11.675012 containerd[1535]: time="2025-05-27T18:31:11.674799305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c896bff9c-ffh9v,Uid:8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4,Namespace:calico-apiserver,Attempt:0,}" May 27 18:31:11.758378 systemd-resolved[1401]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
May 27 18:31:11.814397 containerd[1535]: time="2025-05-27T18:31:11.814336255Z" level=error msg="Failed to destroy network for sandbox \"b614b645443c522f7da3d2acb9ceba69ba93fb2f5bdb17f14557c9c4efb5a225\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.815650 containerd[1535]: time="2025-05-27T18:31:11.815474118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bcb9dc75-47rc8,Uid:87f3a823-c410-4d8f-a90d-b0b1b3a0b283,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b614b645443c522f7da3d2acb9ceba69ba93fb2f5bdb17f14557c9c4efb5a225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.816118 kubelet[1929]: E0527 18:31:11.815723 1929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b614b645443c522f7da3d2acb9ceba69ba93fb2f5bdb17f14557c9c4efb5a225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.816118 kubelet[1929]: E0527 18:31:11.815794 1929 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b614b645443c522f7da3d2acb9ceba69ba93fb2f5bdb17f14557c9c4efb5a225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55bcb9dc75-47rc8" May 27 18:31:11.816118 kubelet[1929]: E0527 18:31:11.815816 1929 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b614b645443c522f7da3d2acb9ceba69ba93fb2f5bdb17f14557c9c4efb5a225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55bcb9dc75-47rc8" May 27 18:31:11.816533 kubelet[1929]: E0527 18:31:11.815873 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55bcb9dc75-47rc8_calico-system(87f3a823-c410-4d8f-a90d-b0b1b3a0b283)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55bcb9dc75-47rc8_calico-system(87f3a823-c410-4d8f-a90d-b0b1b3a0b283)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b614b645443c522f7da3d2acb9ceba69ba93fb2f5bdb17f14557c9c4efb5a225\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55bcb9dc75-47rc8" podUID="87f3a823-c410-4d8f-a90d-b0b1b3a0b283" May 27 18:31:11.824543 containerd[1535]: time="2025-05-27T18:31:11.824427022Z" level=error msg="Failed to destroy network for sandbox \"28cc0b50a72082b4fa065617f5726a5258a6c9e1163fccb3fbdc59967236620a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 27 18:31:11.826834 containerd[1535]: time="2025-05-27T18:31:11.826759612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-l2hf9,Uid:923329c2-959f-4193-b2be-f3dbcc05c0db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28cc0b50a72082b4fa065617f5726a5258a6c9e1163fccb3fbdc59967236620a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.828752 kubelet[1929]: E0527 18:31:11.828689 1929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28cc0b50a72082b4fa065617f5726a5258a6c9e1163fccb3fbdc59967236620a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.828893 kubelet[1929]: E0527 18:31:11.828764 1929 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28cc0b50a72082b4fa065617f5726a5258a6c9e1163fccb3fbdc59967236620a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-l2hf9" May 27 18:31:11.828893 kubelet[1929]: E0527 18:31:11.828786 1929 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28cc0b50a72082b4fa065617f5726a5258a6c9e1163fccb3fbdc59967236620a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-l2hf9" May 27 18:31:11.828893 kubelet[1929]: E0527 18:31:11.828843 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-l2hf9_calico-system(923329c2-959f-4193-b2be-f3dbcc05c0db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-l2hf9_calico-system(923329c2-959f-4193-b2be-f3dbcc05c0db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28cc0b50a72082b4fa065617f5726a5258a6c9e1163fccb3fbdc59967236620a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-l2hf9" podUID="923329c2-959f-4193-b2be-f3dbcc05c0db" May 27 18:31:11.835372 containerd[1535]: time="2025-05-27T18:31:11.835280789Z" level=error msg="Failed to destroy network for sandbox \"9b5e66c2238d8d3740d085a10314bb30fe96a8b11957ca437282e7d3383591ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.836447 containerd[1535]: time="2025-05-27T18:31:11.836364636Z" level=error msg="Failed to destroy network for sandbox \"207661bdcb654362251937b464c4de57d356c0ac8c31a22c8f64506986533671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.837206 containerd[1535]: time="2025-05-27T18:31:11.837118167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c896bff9c-f2xl7,Uid:505f6c41-f4b2-4935-af06-1d5c5642e9d7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5e66c2238d8d3740d085a10314bb30fe96a8b11957ca437282e7d3383591ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.837939 kubelet[1929]: E0527 18:31:11.837837 1929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5e66c2238d8d3740d085a10314bb30fe96a8b11957ca437282e7d3383591ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.838611 containerd[1535]: time="2025-05-27T18:31:11.838539623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c896bff9c-ffh9v,Uid:8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"207661bdcb654362251937b464c4de57d356c0ac8c31a22c8f64506986533671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.838774 kubelet[1929]: E0527 18:31:11.838488 1929 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5e66c2238d8d3740d085a10314bb30fe96a8b11957ca437282e7d3383591ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c896bff9c-f2xl7" May 27 18:31:11.838774 kubelet[1929]: E0527 18:31:11.838640 1929 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5e66c2238d8d3740d085a10314bb30fe96a8b11957ca437282e7d3383591ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c896bff9c-f2xl7" May 27 18:31:11.839009 kubelet[1929]: E0527 18:31:11.838816 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c896bff9c-f2xl7_calico-apiserver(505f6c41-f4b2-4935-af06-1d5c5642e9d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c896bff9c-f2xl7_calico-apiserver(505f6c41-f4b2-4935-af06-1d5c5642e9d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b5e66c2238d8d3740d085a10314bb30fe96a8b11957ca437282e7d3383591ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c896bff9c-f2xl7" 
podUID="505f6c41-f4b2-4935-af06-1d5c5642e9d7" May 27 18:31:11.839385 kubelet[1929]: E0527 18:31:11.839235 1929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"207661bdcb654362251937b464c4de57d356c0ac8c31a22c8f64506986533671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:11.839385 kubelet[1929]: E0527 18:31:11.839296 1929 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"207661bdcb654362251937b464c4de57d356c0ac8c31a22c8f64506986533671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c896bff9c-ffh9v" May 27 18:31:11.839385 kubelet[1929]: E0527 18:31:11.839336 1929 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"207661bdcb654362251937b464c4de57d356c0ac8c31a22c8f64506986533671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c896bff9c-ffh9v" May 27 18:31:11.839726 kubelet[1929]: E0527 18:31:11.839685 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c896bff9c-ffh9v_calico-apiserver(8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c896bff9c-ffh9v_calico-apiserver(8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"207661bdcb654362251937b464c4de57d356c0ac8c31a22c8f64506986533671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c896bff9c-ffh9v" podUID="8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4" May 27 18:31:12.112933 kubelet[1929]: E0527 18:31:12.112854 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:12.270924 systemd[1]: Created slice kubepods-besteffort-poda276bf69_1dea_406f_9796_048db395c71a.slice - libcontainer container kubepods-besteffort-poda276bf69_1dea_406f_9796_048db395c71a.slice. 
May 27 18:31:12.276828 containerd[1535]: time="2025-05-27T18:31:12.276632078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njssr,Uid:a276bf69-1dea-406f-9796-048db395c71a,Namespace:calico-system,Attempt:0,}" May 27 18:31:12.367657 containerd[1535]: time="2025-05-27T18:31:12.367408069Z" level=error msg="Failed to destroy network for sandbox \"982b3148d6f0f87a95afc55ddee059b19ef02cb56925a5cea5b2dacdba5f0beb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:12.368686 containerd[1535]: time="2025-05-27T18:31:12.368630466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njssr,Uid:a276bf69-1dea-406f-9796-048db395c71a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"982b3148d6f0f87a95afc55ddee059b19ef02cb56925a5cea5b2dacdba5f0beb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:12.369224 kubelet[1929]: E0527 18:31:12.369147 1929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"982b3148d6f0f87a95afc55ddee059b19ef02cb56925a5cea5b2dacdba5f0beb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:12.369347 kubelet[1929]: E0527 18:31:12.369233 1929 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"982b3148d6f0f87a95afc55ddee059b19ef02cb56925a5cea5b2dacdba5f0beb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njssr" May 27 18:31:12.369347 kubelet[1929]: E0527 18:31:12.369271 1929 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"982b3148d6f0f87a95afc55ddee059b19ef02cb56925a5cea5b2dacdba5f0beb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njssr" May 27 18:31:12.369459 kubelet[1929]: E0527 18:31:12.369354 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-njssr_calico-system(a276bf69-1dea-406f-9796-048db395c71a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-njssr_calico-system(a276bf69-1dea-406f-9796-048db395c71a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"982b3148d6f0f87a95afc55ddee059b19ef02cb56925a5cea5b2dacdba5f0beb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njssr" podUID="a276bf69-1dea-406f-9796-048db395c71a" May 27 18:31:13.113130 kubelet[1929]: E0527 18:31:13.113050 1929 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:14.113791 kubelet[1929]: E0527 18:31:14.113731 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:14.586153 systemd[1]: Created slice kubepods-besteffort-pod39cdc85a_afd7_457e_b68c_909d1c1ac18e.slice - libcontainer container kubepods-besteffort-pod39cdc85a_afd7_457e_b68c_909d1c1ac18e.slice. May 27 18:31:14.718867 kubelet[1929]: I0527 18:31:14.718821 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdww\" (UniqueName: \"kubernetes.io/projected/39cdc85a-afd7-457e-b68c-909d1c1ac18e-kube-api-access-2hdww\") pod \"nginx-deployment-7fcdb87857-fm8nw\" (UID: \"39cdc85a-afd7-457e-b68c-909d1c1ac18e\") " pod="default/nginx-deployment-7fcdb87857-fm8nw" May 27 18:31:14.893136 containerd[1535]: time="2025-05-27T18:31:14.892679386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-fm8nw,Uid:39cdc85a-afd7-457e-b68c-909d1c1ac18e,Namespace:default,Attempt:0,}" May 27 18:31:15.024041 containerd[1535]: time="2025-05-27T18:31:15.022650229Z" level=error msg="Failed to destroy network for sandbox \"99c2653a0d01f9485fd28a4b7218e7c5509c58aafc4d43b84e2cb39ef52a35b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:15.027702 containerd[1535]: time="2025-05-27T18:31:15.027635597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-fm8nw,Uid:39cdc85a-afd7-457e-b68c-909d1c1ac18e,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c2653a0d01f9485fd28a4b7218e7c5509c58aafc4d43b84e2cb39ef52a35b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:15.027895 systemd[1]: run-netns-cni\x2d48c70a57\x2d6d35\x2da727\x2d5af5\x2db610ff8094c4.mount: Deactivated successfully. 
May 27 18:31:15.029635 kubelet[1929]: E0527 18:31:15.029053 1929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c2653a0d01f9485fd28a4b7218e7c5509c58aafc4d43b84e2cb39ef52a35b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:31:15.029635 kubelet[1929]: E0527 18:31:15.029131 1929 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c2653a0d01f9485fd28a4b7218e7c5509c58aafc4d43b84e2cb39ef52a35b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-fm8nw" May 27 18:31:15.029635 kubelet[1929]: E0527 18:31:15.029158 1929 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c2653a0d01f9485fd28a4b7218e7c5509c58aafc4d43b84e2cb39ef52a35b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-fm8nw" May 27 18:31:15.029889 kubelet[1929]: E0527 18:31:15.029225 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-fm8nw_default(39cdc85a-afd7-457e-b68c-909d1c1ac18e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-fm8nw_default(39cdc85a-afd7-457e-b68c-909d1c1ac18e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99c2653a0d01f9485fd28a4b7218e7c5509c58aafc4d43b84e2cb39ef52a35b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-fm8nw" podUID="39cdc85a-afd7-457e-b68c-909d1c1ac18e" May 27 18:31:15.114426 kubelet[1929]: E0527 18:31:15.114265 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:16.114857 kubelet[1929]: E0527 18:31:16.114811 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:17.086820 kubelet[1929]: E0527 18:31:17.086633 1929 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:17.115385 kubelet[1929]: E0527 18:31:17.115240 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:18.082222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308718962.mount: Deactivated successfully. 
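Every sandbox failure up to this point carries the same root cause: the CNI plugin stats /var/lib/calico/nodename and aborts because the file does not exist yet (calico-node has not started and written it). The sketch below approximates that check under that assumption; it is not the actual Calico CNI code.

```go
// Illustrative sketch of the check behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			// Until calico-node runs and writes this file, every CNI add/delete fails.
			fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
			os.Exit(1)
		}
		fmt.Printf("stat %s: %v\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Println("nodename file present; CNI add/delete can proceed")
}
```

Consistent with this, the sandbox errors stop recurring once the calico-node container is started later in the log (StartContainer at 18:31:18) and pod networking begins to succeed.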
May 27 18:31:18.111431 containerd[1535]: time="2025-05-27T18:31:18.111365513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:18.112238 containerd[1535]: time="2025-05-27T18:31:18.112174314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 18:31:18.114021 containerd[1535]: time="2025-05-27T18:31:18.113000853Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:18.115069 containerd[1535]: time="2025-05-27T18:31:18.115031920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:18.115726 containerd[1535]: time="2025-05-27T18:31:18.115693183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.73829561s" May 27 18:31:18.115845 containerd[1535]: time="2025-05-27T18:31:18.115829665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 18:31:18.115935 kubelet[1929]: E0527 18:31:18.115903 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:18.159243 containerd[1535]: time="2025-05-27T18:31:18.159201069Z" level=info msg="CreateContainer within sandbox \"7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 18:31:18.171224 containerd[1535]: time="2025-05-27T18:31:18.171169421Z" level=info msg="Container 2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:18.184465 containerd[1535]: time="2025-05-27T18:31:18.184388563Z" level=info msg="CreateContainer within sandbox \"7695ac7914951cd7297c97f48cce8c44c67bfef9a50dff20ef1155c82026e893\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37\"" May 27 18:31:18.185696 containerd[1535]: time="2025-05-27T18:31:18.185663236Z" level=info msg="StartContainer for \"2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37\"" May 27 18:31:18.187653 containerd[1535]: time="2025-05-27T18:31:18.187594745Z" level=info msg="connecting to shim 2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37" address="unix:///run/containerd/s/27fa9f106177f276b59b2269617bfd0796e014d2d3846dbcacafae222a252f34" protocol=ttrpc version=3 May 27 18:31:18.264359 systemd[1]: Started cri-containerd-2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37.scope - libcontainer container 2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37. 
May 27 18:31:18.333863 containerd[1535]: time="2025-05-27T18:31:18.333796431Z" level=info msg="StartContainer for \"2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37\" returns successfully" May 27 18:31:18.469466 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 18:31:18.469664 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 27 18:31:19.116195 kubelet[1929]: E0527 18:31:19.116137 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:19.407054 kubelet[1929]: I0527 18:31:19.406034 1929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:31:20.117892 kubelet[1929]: E0527 18:31:20.117828 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:20.616236 systemd-networkd[1453]: vxlan.calico: Link UP May 27 18:31:20.616248 systemd-networkd[1453]: vxlan.calico: Gained carrier May 27 18:31:21.118129 kubelet[1929]: E0527 18:31:21.118058 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:22.119265 kubelet[1929]: E0527 18:31:22.119140 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:22.252721 containerd[1535]: time="2025-05-27T18:31:22.252678317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bcb9dc75-47rc8,Uid:87f3a823-c410-4d8f-a90d-b0b1b3a0b283,Namespace:calico-system,Attempt:0,}" May 27 18:31:22.380186 systemd-networkd[1453]: vxlan.calico: Gained IPv6LL May 27 18:31:22.518043 systemd-networkd[1453]: calib534e0e1d2b: Link UP May 27 18:31:22.519758 systemd-networkd[1453]: calib534e0e1d2b: Gained carrier May 27 18:31:22.532880 kubelet[1929]: I0527 18:31:22.532817 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8rkk7" podStartSLOduration=7.638114834 podStartE2EDuration="25.532786176s" podCreationTimestamp="2025-05-27 18:30:57 +0000 UTC" firstStartedPulling="2025-05-27 18:31:00.222623804 +0000 UTC m=+3.941826818" lastFinishedPulling="2025-05-27 18:31:18.117295155 +0000 UTC m=+21.836498160" observedRunningTime="2025-05-27 18:31:18.424122411 +0000 UTC m=+22.143325423" watchObservedRunningTime="2025-05-27 18:31:22.532786176 +0000 UTC m=+26.251989408" May 27 18:31:22.540043 containerd[1535]: 2025-05-27 18:31:22.328 [INFO][2881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0 whisker-55bcb9dc75- calico-system 87f3a823-c410-4d8f-a90d-b0b1b3a0b283 3069 0 2025-05-27 18:30:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55bcb9dc75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 146.190.128.44 whisker-55bcb9dc75-47rc8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib534e0e1d2b [] [] }} ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Namespace="calico-system" Pod="whisker-55bcb9dc75-47rc8" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-" May 27 18:31:22.540043 containerd[1535]: 2025-05-27 18:31:22.328 [INFO][2881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Namespace="calico-system" Pod="whisker-55bcb9dc75-47rc8" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:22.540043 containerd[1535]: 2025-05-27 18:31:22.424 [INFO][2892] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.425 [INFO][2892] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"146.190.128.44", "pod":"whisker-55bcb9dc75-47rc8", "timestamp":"2025-05-27 18:31:22.424768544 +0000 UTC"}, Hostname:"146.190.128.44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.425 [INFO][2892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.425 [INFO][2892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.425 [INFO][2892] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.128.44' May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.443 [INFO][2892] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" host="146.190.128.44" May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.475 [INFO][2892] ipam/ipam.go 394: Looking up existing affinities for host host="146.190.128.44" May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.484 [INFO][2892] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="146.190.128.44" May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.487 [INFO][2892] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.490 [INFO][2892] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:22.540410 containerd[1535]: 2025-05-27 18:31:22.490 [INFO][2892] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" host="146.190.128.44" May 27 18:31:22.540828 containerd[1535]: 2025-05-27 18:31:22.493 [INFO][2892] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee May 27 18:31:22.540828 containerd[1535]: 2025-05-27 18:31:22.499 [INFO][2892] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" host="146.190.128.44" May 27 18:31:22.540828 containerd[1535]: 2025-05-27 
18:31:22.507 [INFO][2892] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.1/26] block=192.168.90.0/26 handle="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" host="146.190.128.44" May 27 18:31:22.540828 containerd[1535]: 2025-05-27 18:31:22.507 [INFO][2892] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.1/26] handle="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" host="146.190.128.44" May 27 18:31:22.540828 containerd[1535]: 2025-05-27 18:31:22.507 [INFO][2892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:31:22.540828 containerd[1535]: 2025-05-27 18:31:22.507 [INFO][2892] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.1/26] IPv6=[] ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:22.542769 containerd[1535]: 2025-05-27 18:31:22.511 [INFO][2881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Namespace="calico-system" Pod="whisker-55bcb9dc75-47rc8" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0", GenerateName:"whisker-55bcb9dc75-", Namespace:"calico-system", SelfLink:"", UID:"87f3a823-c410-4d8f-a90d-b0b1b3a0b283", ResourceVersion:"3069", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55bcb9dc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"", Pod:"whisker-55bcb9dc75-47rc8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib534e0e1d2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:22.542769 containerd[1535]: 2025-05-27 18:31:22.511 [INFO][2881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.1/32] ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Namespace="calico-system" Pod="whisker-55bcb9dc75-47rc8" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:22.542949 containerd[1535]: 2025-05-27 18:31:22.511 [INFO][2881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib534e0e1d2b ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Namespace="calico-system" Pod="whisker-55bcb9dc75-47rc8" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:22.542949 containerd[1535]: 2025-05-27 18:31:22.521 [INFO][2881] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Namespace="calico-system" Pod="whisker-55bcb9dc75-47rc8" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:22.543840 containerd[1535]: 2025-05-27 18:31:22.521 [INFO][2881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Namespace="calico-system" Pod="whisker-55bcb9dc75-47rc8" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0", GenerateName:"whisker-55bcb9dc75-", Namespace:"calico-system", SelfLink:"", UID:"87f3a823-c410-4d8f-a90d-b0b1b3a0b283", ResourceVersion:"3069", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55bcb9dc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee", Pod:"whisker-55bcb9dc75-47rc8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib534e0e1d2b", MAC:"32:9d:60:81:15:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:22.544073 containerd[1535]: 2025-05-27 18:31:22.534 [INFO][2881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Namespace="calico-system" Pod="whisker-55bcb9dc75-47rc8" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:22.609190 containerd[1535]: time="2025-05-27T18:31:22.609122813Z" level=info msg="connecting to shim f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" address="unix:///run/containerd/s/fbceb85fdfd89aa95ad9234f4f592a00c8cc4bc51f23d60954d3c25225d79dfe" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:22.647344 systemd[1]: Started cri-containerd-f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee.scope - libcontainer container f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee. 
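The IPAM lines above show the plugin loading the affinity block 192.168.90.0/26 for host 146.190.128.44 and handing out 192.168.90.1/32 to the whisker pod. A minimal sketch of that block arithmetic, using only the values visible in the log, is below; it is not Calico's IPAM implementation.

```go
// Minimal sketch of the IPAM arithmetic visible in the log above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.90.0/26")
	size := 1 << (32 - block.Bits()) // a /26 spans 64 addresses
	first := block.Addr().Next()     // 192.168.90.1, assigned here to whisker-55bcb9dc75-47rc8
	fmt.Printf("block %s holds %d addresses; first assignment %s/32\n", block, size, first)
}
```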
May 27 18:31:22.708389 containerd[1535]: time="2025-05-27T18:31:22.708319105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bcb9dc75-47rc8,Uid:87f3a823-c410-4d8f-a90d-b0b1b3a0b283,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\"" May 27 18:31:22.712731 containerd[1535]: time="2025-05-27T18:31:22.712506763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:31:22.946425 containerd[1535]: time="2025-05-27T18:31:22.946200043Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:31:22.947224 containerd[1535]: time="2025-05-27T18:31:22.947114019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:31:22.947224 containerd[1535]: time="2025-05-27T18:31:22.947173203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 18:31:22.947530 kubelet[1929]: E0527 18:31:22.947471 1929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:31:22.947633 kubelet[1929]: E0527 18:31:22.947544 1929 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:31:22.948147 kubelet[1929]: E0527 18:31:22.948024 1929 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:32c2af8bc6d04c3e98eae78dd587e8ad,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-spnjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bcb9dc75-47rc8_calico-system(87f3a823-c410-4d8f-a90d-b0b1b3a0b283): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:31:22.951126 containerd[1535]: time="2025-05-27T18:31:22.951064750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:31:23.119607 kubelet[1929]: E0527 18:31:23.119547 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:23.215775 containerd[1535]: time="2025-05-27T18:31:23.215334400Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:31:23.216611 containerd[1535]: time="2025-05-27T18:31:23.216158253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:31:23.216611 containerd[1535]: time="2025-05-27T18:31:23.216295171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 18:31:23.217612 kubelet[1929]: E0527 18:31:23.216819 1929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:31:23.217612 kubelet[1929]: E0527 18:31:23.216878 1929 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:31:23.217742 kubelet[1929]: E0527 18:31:23.217224 1929 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spnjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bcb9dc75-47rc8_calico-system(87f3a823-c410-4d8f-a90d-b0b1b3a0b283): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:31:23.218561 kubelet[1929]: E0527 18:31:23.218450 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-55bcb9dc75-47rc8" podUID="87f3a823-c410-4d8f-a90d-b0b1b3a0b283" May 27 18:31:23.256032 containerd[1535]: time="2025-05-27T18:31:23.255743517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-l2hf9,Uid:923329c2-959f-4193-b2be-f3dbcc05c0db,Namespace:calico-system,Attempt:0,}" May 27 18:31:23.256867 containerd[1535]: time="2025-05-27T18:31:23.256650744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njssr,Uid:a276bf69-1dea-406f-9796-048db395c71a,Namespace:calico-system,Attempt:0,}" May 27 18:31:23.421669 kubelet[1929]: E0527 18:31:23.421582 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-55bcb9dc75-47rc8" podUID="87f3a823-c410-4d8f-a90d-b0b1b3a0b283" May 27 18:31:23.451684 systemd-networkd[1453]: caliaed1bd403ea: Link UP May 27 18:31:23.456050 systemd-networkd[1453]: caliaed1bd403ea: Gained carrier May 27 18:31:23.488037 containerd[1535]: 2025-05-27 18:31:23.327 [INFO][2960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.128.44-k8s-csi--node--driver--njssr-eth0 csi-node-driver- calico-system a276bf69-1dea-406f-9796-048db395c71a 2991 0 2025-05-27 18:30:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 146.190.128.44 csi-node-driver-njssr eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] caliaed1bd403ea [] [] }} ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Namespace="calico-system" Pod="csi-node-driver-njssr" WorkloadEndpoint="146.190.128.44-k8s-csi--node--driver--njssr-" May 27 18:31:23.488037 containerd[1535]: 2025-05-27 18:31:23.328 [INFO][2960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Namespace="calico-system" Pod="csi-node-driver-njssr" WorkloadEndpoint="146.190.128.44-k8s-csi--node--driver--njssr-eth0" May 27 18:31:23.488037 containerd[1535]: 2025-05-27 18:31:23.371 [INFO][2987] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" HandleID="k8s-pod-network.b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Workload="146.190.128.44-k8s-csi--node--driver--njssr-eth0" May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.372 [INFO][2987] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" HandleID="k8s-pod-network.b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Workload="146.190.128.44-k8s-csi--node--driver--njssr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233670), Attrs:map[string]string{"namespace":"calico-system", "node":"146.190.128.44", "pod":"csi-node-driver-njssr", "timestamp":"2025-05-27 18:31:23.371956135 +0000 UTC"}, Hostname:"146.190.128.44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.372 [INFO][2987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.372 [INFO][2987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.372 [INFO][2987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.128.44' May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.386 [INFO][2987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" host="146.190.128.44" May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.396 [INFO][2987] ipam/ipam.go 394: Looking up existing affinities for host host="146.190.128.44" May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.407 [INFO][2987] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="146.190.128.44" May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.411 [INFO][2987] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.415 [INFO][2987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:23.489579 containerd[1535]: 2025-05-27 18:31:23.415 [INFO][2987] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" host="146.190.128.44" May 27 18:31:23.489889 containerd[1535]: 2025-05-27 18:31:23.418 [INFO][2987] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e May 27 18:31:23.489889 containerd[1535]: 2025-05-27 18:31:23.426 [INFO][2987] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" host="146.190.128.44" May 27 18:31:23.489889 containerd[1535]: 2025-05-27 18:31:23.438 [INFO][2987] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.2/26] block=192.168.90.0/26 handle="k8s-pod-network.b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" host="146.190.128.44" May 27 18:31:23.489889 containerd[1535]: 2025-05-27 18:31:23.439 [INFO][2987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.2/26] handle="k8s-pod-network.b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" host="146.190.128.44" May 27 18:31:23.489889 containerd[1535]: 2025-05-27 18:31:23.439 [INFO][2987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:31:23.489889 containerd[1535]: 2025-05-27 18:31:23.439 [INFO][2987] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.2/26] IPv6=[] ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" HandleID="k8s-pod-network.b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Workload="146.190.128.44-k8s-csi--node--driver--njssr-eth0" May 27 18:31:23.490053 containerd[1535]: 2025-05-27 18:31:23.444 [INFO][2960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Namespace="calico-system" Pod="csi-node-driver-njssr" WorkloadEndpoint="146.190.128.44-k8s-csi--node--driver--njssr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-csi--node--driver--njssr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a276bf69-1dea-406f-9796-048db395c71a", ResourceVersion:"2991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"", Pod:"csi-node-driver-njssr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaed1bd403ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:23.491703 containerd[1535]: 2025-05-27 18:31:23.444 [INFO][2960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.2/32] ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Namespace="calico-system" Pod="csi-node-driver-njssr" WorkloadEndpoint="146.190.128.44-k8s-csi--node--driver--njssr-eth0" May 27 18:31:23.491703 containerd[1535]: 2025-05-27 18:31:23.444 [INFO][2960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaed1bd403ea ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Namespace="calico-system" Pod="csi-node-driver-njssr" WorkloadEndpoint="146.190.128.44-k8s-csi--node--driver--njssr-eth0" May 27 18:31:23.491703 containerd[1535]: 2025-05-27 18:31:23.462 [INFO][2960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Namespace="calico-system" Pod="csi-node-driver-njssr" WorkloadEndpoint="146.190.128.44-k8s-csi--node--driver--njssr-eth0" May 27 18:31:23.491818 containerd[1535]: 2025-05-27 18:31:23.463 [INFO][2960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Namespace="calico-system" Pod="csi-node-driver-njssr" 
WorkloadEndpoint="146.190.128.44-k8s-csi--node--driver--njssr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-csi--node--driver--njssr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a276bf69-1dea-406f-9796-048db395c71a", ResourceVersion:"2991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e", Pod:"csi-node-driver-njssr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaed1bd403ea", MAC:"f2:63:0b:95:e0:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:23.491880 containerd[1535]: 2025-05-27 18:31:23.482 [INFO][2960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" Namespace="calico-system" Pod="csi-node-driver-njssr" WorkloadEndpoint="146.190.128.44-k8s-csi--node--driver--njssr-eth0" May 27 18:31:23.530552 containerd[1535]: time="2025-05-27T18:31:23.530423607Z" level=info msg="connecting to shim b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e" address="unix:///run/containerd/s/584b7c188d56bcea2ad609c68f3e3b8630fa5592f9eb6c49b94e43771254594a" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:23.561835 systemd-networkd[1453]: cali4f9e2874fc5: Link UP May 27 18:31:23.562379 systemd-networkd[1453]: cali4f9e2874fc5: Gained carrier May 27 18:31:23.580440 systemd[1]: Started cri-containerd-b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e.scope - libcontainer container b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e. 
May 27 18:31:23.589891 containerd[1535]: 2025-05-27 18:31:23.326 [INFO][2963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0 goldmane-78d55f7ddc- calico-system 923329c2-959f-4193-b2be-f3dbcc05c0db 3071 0 2025-05-27 18:30:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 146.190.128.44 goldmane-78d55f7ddc-l2hf9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4f9e2874fc5 [] [] }} ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l2hf9" WorkloadEndpoint="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-" May 27 18:31:23.589891 containerd[1535]: 2025-05-27 18:31:23.326 [INFO][2963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l2hf9" WorkloadEndpoint="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" May 27 18:31:23.589891 containerd[1535]: 2025-05-27 18:31:23.377 [INFO][2985] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" HandleID="k8s-pod-network.d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Workload="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.378 [INFO][2985] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" HandleID="k8s-pod-network.d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Workload="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d99a0), Attrs:map[string]string{"namespace":"calico-system", "node":"146.190.128.44", "pod":"goldmane-78d55f7ddc-l2hf9", "timestamp":"2025-05-27 18:31:23.377161246 +0000 UTC"}, Hostname:"146.190.128.44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.378 [INFO][2985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.439 [INFO][2985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.439 [INFO][2985] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.128.44' May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.489 [INFO][2985] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" host="146.190.128.44" May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.501 [INFO][2985] ipam/ipam.go 394: Looking up existing affinities for host host="146.190.128.44" May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.511 [INFO][2985] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="146.190.128.44" May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.519 [INFO][2985] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.525 [INFO][2985] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:23.590278 containerd[1535]: 2025-05-27 18:31:23.525 [INFO][2985] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" host="146.190.128.44" May 27 18:31:23.591205 containerd[1535]: 2025-05-27 18:31:23.528 [INFO][2985] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962 May 27 18:31:23.591205 containerd[1535]: 2025-05-27 18:31:23.535 [INFO][2985] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" host="146.190.128.44" May 27 18:31:23.591205 containerd[1535]: 2025-05-27 18:31:23.544 [INFO][2985] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.3/26] block=192.168.90.0/26 handle="k8s-pod-network.d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" host="146.190.128.44" May 27 18:31:23.591205 containerd[1535]: 2025-05-27 18:31:23.547 [INFO][2985] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.3/26] handle="k8s-pod-network.d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" host="146.190.128.44" May 27 18:31:23.591205 containerd[1535]: 2025-05-27 18:31:23.547 [INFO][2985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:31:23.591205 containerd[1535]: 2025-05-27 18:31:23.548 [INFO][2985] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.3/26] IPv6=[] ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" HandleID="k8s-pod-network.d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Workload="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" May 27 18:31:23.591537 containerd[1535]: 2025-05-27 18:31:23.556 [INFO][2963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l2hf9" WorkloadEndpoint="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"923329c2-959f-4193-b2be-f3dbcc05c0db", ResourceVersion:"3071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"", Pod:"goldmane-78d55f7ddc-l2hf9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.90.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f9e2874fc5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:23.591537 containerd[1535]: 2025-05-27 18:31:23.556 [INFO][2963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.3/32] ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l2hf9" WorkloadEndpoint="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" May 27 18:31:23.591683 containerd[1535]: 2025-05-27 18:31:23.556 [INFO][2963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f9e2874fc5 ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l2hf9" WorkloadEndpoint="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" May 27 18:31:23.591683 containerd[1535]: 2025-05-27 18:31:23.563 [INFO][2963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l2hf9" WorkloadEndpoint="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" May 27 18:31:23.591801 containerd[1535]: 2025-05-27 18:31:23.564 [INFO][2963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l2hf9" 
WorkloadEndpoint="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"923329c2-959f-4193-b2be-f3dbcc05c0db", ResourceVersion:"3071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962", Pod:"goldmane-78d55f7ddc-l2hf9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.90.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f9e2874fc5", MAC:"36:90:22:af:49:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:23.591901 containerd[1535]: 2025-05-27 18:31:23.583 [INFO][2963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l2hf9" WorkloadEndpoint="146.190.128.44-k8s-goldmane--78d55f7ddc--l2hf9-eth0" May 27 18:31:23.627735 containerd[1535]: time="2025-05-27T18:31:23.626197800Z" level=info msg="connecting to shim d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962" address="unix:///run/containerd/s/4262cc4f3bddd7c1e02ac241dc6514496c642ef742dde8a64435708b3a1246c7" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:23.664746 containerd[1535]: time="2025-05-27T18:31:23.663610668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njssr,Uid:a276bf69-1dea-406f-9796-048db395c71a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e\"" May 27 18:31:23.669787 containerd[1535]: time="2025-05-27T18:31:23.669727540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 18:31:23.687363 systemd[1]: Started cri-containerd-d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962.scope - libcontainer container d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962. 
May 27 18:31:23.756790 containerd[1535]: time="2025-05-27T18:31:23.756659344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-l2hf9,Uid:923329c2-959f-4193-b2be-f3dbcc05c0db,Namespace:calico-system,Attempt:0,} returns sandbox id \"d01cf35876aff02a7473e3245e34a3117d48fa6600d7a743d14ad1a8a238a962\"" May 27 18:31:24.108828 systemd-networkd[1453]: calib534e0e1d2b: Gained IPv6LL May 27 18:31:24.120385 kubelet[1929]: E0527 18:31:24.120275 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:24.426670 kubelet[1929]: E0527 18:31:24.426484 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-55bcb9dc75-47rc8" podUID="87f3a823-c410-4d8f-a90d-b0b1b3a0b283" May 27 18:31:24.492282 systemd-networkd[1453]: caliaed1bd403ea: Gained IPv6LL May 27 18:31:24.748548 systemd-networkd[1453]: cali4f9e2874fc5: Gained IPv6LL May 27 18:31:25.122178 kubelet[1929]: E0527 18:31:25.122113 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:25.255970 containerd[1535]: time="2025-05-27T18:31:25.255908521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c896bff9c-f2xl7,Uid:505f6c41-f4b2-4935-af06-1d5c5642e9d7,Namespace:calico-apiserver,Attempt:0,}" May 27 18:31:25.447126 systemd-networkd[1453]: cali7beff0fb98a: Link UP May 27 18:31:25.450113 systemd-networkd[1453]: cali7beff0fb98a: Gained carrier May 27 18:31:25.552806 containerd[1535]: 2025-05-27 18:31:25.314 [INFO][3112] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0 calico-apiserver-5c896bff9c- calico-apiserver 505f6c41-f4b2-4935-af06-1d5c5642e9d7 3070 0 2025-05-27 18:30:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c896bff9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 146.190.128.44 calico-apiserver-5c896bff9c-f2xl7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7beff0fb98a [] [] }} ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-f2xl7" 
WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-" May 27 18:31:25.552806 containerd[1535]: 2025-05-27 18:31:25.314 [INFO][3112] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-f2xl7" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" May 27 18:31:25.552806 containerd[1535]: 2025-05-27 18:31:25.356 [INFO][3124] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" HandleID="k8s-pod-network.a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Workload="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" May 27 18:31:25.553205 containerd[1535]: 2025-05-27 18:31:25.357 [INFO][3124] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" HandleID="k8s-pod-network.a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Workload="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"146.190.128.44", "pod":"calico-apiserver-5c896bff9c-f2xl7", "timestamp":"2025-05-27 18:31:25.356775903 +0000 UTC"}, Hostname:"146.190.128.44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:31:25.553205 containerd[1535]: 2025-05-27 18:31:25.357 [INFO][3124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:31:25.553205 containerd[1535]: 2025-05-27 18:31:25.357 [INFO][3124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:31:25.553205 containerd[1535]: 2025-05-27 18:31:25.357 [INFO][3124] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.128.44' May 27 18:31:25.553205 containerd[1535]: 2025-05-27 18:31:25.376 [INFO][3124] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" host="146.190.128.44" May 27 18:31:25.553205 containerd[1535]: 2025-05-27 18:31:25.391 [INFO][3124] ipam/ipam.go 394: Looking up existing affinities for host host="146.190.128.44" May 27 18:31:25.553205 containerd[1535]: 2025-05-27 18:31:25.403 [INFO][3124] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="146.190.128.44" May 27 18:31:25.553205 containerd[1535]: 2025-05-27 18:31:25.408 [INFO][3124] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:25.553205 containerd[1535]: 2025-05-27 18:31:25.414 [INFO][3124] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:25.553435 containerd[1535]: 2025-05-27 18:31:25.414 [INFO][3124] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" host="146.190.128.44" May 27 18:31:25.553435 containerd[1535]: 2025-05-27 18:31:25.417 [INFO][3124] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8 May 27 18:31:25.553435 containerd[1535]: 2025-05-27 18:31:25.424 [INFO][3124] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" host="146.190.128.44" May 27 18:31:25.553435 containerd[1535]: 2025-05-27 18:31:25.434 [INFO][3124] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.4/26] block=192.168.90.0/26 handle="k8s-pod-network.a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" host="146.190.128.44" May 27 18:31:25.553435 containerd[1535]: 2025-05-27 18:31:25.434 [INFO][3124] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.4/26] handle="k8s-pod-network.a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" host="146.190.128.44" May 27 18:31:25.553435 containerd[1535]: 2025-05-27 18:31:25.434 [INFO][3124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:31:25.553435 containerd[1535]: 2025-05-27 18:31:25.435 [INFO][3124] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.4/26] IPv6=[] ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" HandleID="k8s-pod-network.a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Workload="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" May 27 18:31:25.553610 containerd[1535]: 2025-05-27 18:31:25.438 [INFO][3112] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-f2xl7" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0", GenerateName:"calico-apiserver-5c896bff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"505f6c41-f4b2-4935-af06-1d5c5642e9d7", ResourceVersion:"3070", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c896bff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"", Pod:"calico-apiserver-5c896bff9c-f2xl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7beff0fb98a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:25.553825 containerd[1535]: 2025-05-27 18:31:25.438 [INFO][3112] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.4/32] ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-f2xl7" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" May 27 18:31:25.553825 containerd[1535]: 2025-05-27 18:31:25.438 [INFO][3112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7beff0fb98a ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-f2xl7" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" May 27 18:31:25.553825 containerd[1535]: 2025-05-27 18:31:25.457 [INFO][3112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-f2xl7" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" May 27 18:31:25.553905 containerd[1535]: 2025-05-27 18:31:25.468 [INFO][3112] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-f2xl7" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0", GenerateName:"calico-apiserver-5c896bff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"505f6c41-f4b2-4935-af06-1d5c5642e9d7", ResourceVersion:"3070", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c896bff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8", Pod:"calico-apiserver-5c896bff9c-f2xl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7beff0fb98a", MAC:"42:b4:36:f2:ef:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:25.553968 containerd[1535]: 2025-05-27 18:31:25.545 [INFO][3112] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-f2xl7" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--f2xl7-eth0" May 27 18:31:25.618130 containerd[1535]: time="2025-05-27T18:31:25.616888919Z" level=info msg="connecting to shim a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8" address="unix:///run/containerd/s/96e432b4b82854f4291b09f893f6c0f4f7dd478ceb60e1ee9c8b5484d6dd6a2d" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:25.673442 systemd[1]: Started cri-containerd-a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8.scope - libcontainer container a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8. May 27 18:31:25.713313 update_engine[1519]: I20250527 18:31:25.713033 1519 update_attempter.cc:509] Updating boot flags... 
May 27 18:31:25.753206 containerd[1535]: time="2025-05-27T18:31:25.753139876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:25.753776 containerd[1535]: time="2025-05-27T18:31:25.753733669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 18:31:25.755369 containerd[1535]: time="2025-05-27T18:31:25.754574890Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:25.757076 containerd[1535]: time="2025-05-27T18:31:25.757032203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:25.757690 containerd[1535]: time="2025-05-27T18:31:25.757658582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.087878694s" May 27 18:31:25.757815 containerd[1535]: time="2025-05-27T18:31:25.757800182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 18:31:25.768296 containerd[1535]: time="2025-05-27T18:31:25.768215074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:31:25.781376 containerd[1535]: time="2025-05-27T18:31:25.781332466Z" level=info msg="CreateContainer within sandbox \"b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 18:31:25.849657 containerd[1535]: time="2025-05-27T18:31:25.849598094Z" level=info msg="Container 17fee52ca174a717cf7207280c60cfe6c694ccf4163da1386d9379a30d3d70b4: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:25.857590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851028900.mount: Deactivated successfully. 
May 27 18:31:25.944910 containerd[1535]: time="2025-05-27T18:31:25.940681715Z" level=info msg="CreateContainer within sandbox \"b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"17fee52ca174a717cf7207280c60cfe6c694ccf4163da1386d9379a30d3d70b4\"" May 27 18:31:25.944910 containerd[1535]: time="2025-05-27T18:31:25.944325418Z" level=info msg="StartContainer for \"17fee52ca174a717cf7207280c60cfe6c694ccf4163da1386d9379a30d3d70b4\"" May 27 18:31:25.964184 containerd[1535]: time="2025-05-27T18:31:25.964047081Z" level=info msg="connecting to shim 17fee52ca174a717cf7207280c60cfe6c694ccf4163da1386d9379a30d3d70b4" address="unix:///run/containerd/s/584b7c188d56bcea2ad609c68f3e3b8630fa5592f9eb6c49b94e43771254594a" protocol=ttrpc version=3 May 27 18:31:25.967570 containerd[1535]: time="2025-05-27T18:31:25.966753132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c896bff9c-f2xl7,Uid:505f6c41-f4b2-4935-af06-1d5c5642e9d7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8\"" May 27 18:31:25.988746 containerd[1535]: time="2025-05-27T18:31:25.988690245Z" level=info msg="StopPodSandbox for \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\"" May 27 18:31:26.009298 systemd[1]: Started cri-containerd-17fee52ca174a717cf7207280c60cfe6c694ccf4163da1386d9379a30d3d70b4.scope - libcontainer container 17fee52ca174a717cf7207280c60cfe6c694ccf4163da1386d9379a30d3d70b4. May 27 18:31:26.009706 systemd[1]: cri-containerd-f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee.scope: Deactivated successfully. May 27 18:31:26.022522 containerd[1535]: time="2025-05-27T18:31:26.022458752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\" id:\"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\" pid:2946 exit_status:137 exited_at:{seconds:1748370686 nanos:19290230}" May 27 18:31:26.081030 containerd[1535]: time="2025-05-27T18:31:26.080917092Z" level=info msg="received exit event sandbox_id:\"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\" exit_status:137 exited_at:{seconds:1748370686 nanos:19290230}" May 27 18:31:26.081172 containerd[1535]: time="2025-05-27T18:31:26.081146620Z" level=info msg="shim disconnected" id=f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee namespace=k8s.io May 27 18:31:26.081172 containerd[1535]: time="2025-05-27T18:31:26.081162725Z" level=warning msg="cleaning up after shim disconnected" id=f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee namespace=k8s.io May 27 18:31:26.081291 containerd[1535]: time="2025-05-27T18:31:26.081170019Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 18:31:26.123652 kubelet[1929]: E0527 18:31:26.123279 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:26.129240 containerd[1535]: time="2025-05-27T18:31:26.129123973Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:31:26.132488 containerd[1535]: time="2025-05-27T18:31:26.132369161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" 
failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:31:26.134840 containerd[1535]: time="2025-05-27T18:31:26.133856840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:31:26.135195 kubelet[1929]: E0527 18:31:26.134073 1929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:31:26.135195 kubelet[1929]: E0527 18:31:26.134137 1929 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:31:26.135195 kubelet[1929]: E0527 18:31:26.134421 1929 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2wlwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-l2hf9_calico-system(923329c2-959f-4193-b2be-f3dbcc05c0db): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:31:26.140685 kubelet[1929]: E0527 18:31:26.136159 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l2hf9" podUID="923329c2-959f-4193-b2be-f3dbcc05c0db" May 27 18:31:26.141158 containerd[1535]: time="2025-05-27T18:31:26.140457625Z" level=info msg="StartContainer for \"17fee52ca174a717cf7207280c60cfe6c694ccf4163da1386d9379a30d3d70b4\" returns successfully" May 27 18:31:26.141698 containerd[1535]: time="2025-05-27T18:31:26.141278660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 18:31:26.244127 systemd-networkd[1453]: calib534e0e1d2b: Link DOWN May 27 18:31:26.244139 systemd-networkd[1453]: calib534e0e1d2b: Lost carrier May 27 18:31:26.259144 containerd[1535]: time="2025-05-27T18:31:26.255965048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c896bff9c-ffh9v,Uid:8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4,Namespace:calico-apiserver,Attempt:0,}" May 27 18:31:26.276787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee-rootfs.mount: Deactivated successfully. May 27 18:31:26.277024 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee-shm.mount: Deactivated successfully. 
May 27 18:31:26.437188 kubelet[1929]: I0527 18:31:26.437157 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:26.444305 kubelet[1929]: E0527 18:31:26.444251 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l2hf9" podUID="923329c2-959f-4193-b2be-f3dbcc05c0db" May 27 18:31:26.456444 containerd[1535]: 2025-05-27 18:31:26.241 [INFO][3264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:26.456444 containerd[1535]: 2025-05-27 18:31:26.241 [INFO][3264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" iface="eth0" netns="/var/run/netns/cni-398c6576-be00-88a2-eb9a-34ffdc452c19" May 27 18:31:26.456444 containerd[1535]: 2025-05-27 18:31:26.242 [INFO][3264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" iface="eth0" netns="/var/run/netns/cni-398c6576-be00-88a2-eb9a-34ffdc452c19" May 27 18:31:26.456444 containerd[1535]: 2025-05-27 18:31:26.253 [INFO][3264] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" after=11.632125ms iface="eth0" netns="/var/run/netns/cni-398c6576-be00-88a2-eb9a-34ffdc452c19" May 27 18:31:26.456444 containerd[1535]: 2025-05-27 18:31:26.254 [INFO][3264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:26.456444 containerd[1535]: 2025-05-27 18:31:26.257 [INFO][3264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:26.456444 containerd[1535]: 2025-05-27 18:31:26.341 [INFO][3292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:26.456444 containerd[1535]: 2025-05-27 18:31:26.342 [INFO][3292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:31:26.456444 containerd[1535]: 2025-05-27 18:31:26.343 [INFO][3292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:31:26.457190 containerd[1535]: 2025-05-27 18:31:26.440 [INFO][3292] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:26.457190 containerd[1535]: 2025-05-27 18:31:26.440 [INFO][3292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:26.457190 containerd[1535]: 2025-05-27 18:31:26.446 [INFO][3292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:31:26.457190 containerd[1535]: 2025-05-27 18:31:26.454 [INFO][3264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:26.460048 systemd[1]: run-netns-cni\x2d398c6576\x2dbe00\x2d88a2\x2deb9a\x2d34ffdc452c19.mount: Deactivated successfully. May 27 18:31:26.461742 containerd[1535]: time="2025-05-27T18:31:26.461700999Z" level=info msg="TearDown network for sandbox \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\" successfully" May 27 18:31:26.462744 containerd[1535]: time="2025-05-27T18:31:26.462538233Z" level=info msg="StopPodSandbox for \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\" returns successfully" May 27 18:31:26.621973 kubelet[1929]: I0527 18:31:26.619896 1929 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-whisker-ca-bundle\") pod \"87f3a823-c410-4d8f-a90d-b0b1b3a0b283\" (UID: \"87f3a823-c410-4d8f-a90d-b0b1b3a0b283\") " May 27 18:31:26.621973 kubelet[1929]: I0527 18:31:26.619978 1929 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spnjc\" (UniqueName: \"kubernetes.io/projected/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-kube-api-access-spnjc\") pod \"87f3a823-c410-4d8f-a90d-b0b1b3a0b283\" (UID: \"87f3a823-c410-4d8f-a90d-b0b1b3a0b283\") " May 27 18:31:26.621973 kubelet[1929]: I0527 18:31:26.620014 1929 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-whisker-backend-key-pair\") pod \"87f3a823-c410-4d8f-a90d-b0b1b3a0b283\" (UID: \"87f3a823-c410-4d8f-a90d-b0b1b3a0b283\") " May 27 18:31:26.624433 kubelet[1929]: I0527 18:31:26.624377 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "87f3a823-c410-4d8f-a90d-b0b1b3a0b283" (UID: "87f3a823-c410-4d8f-a90d-b0b1b3a0b283"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 18:31:26.640367 systemd[1]: var-lib-kubelet-pods-87f3a823\x2dc410\x2d4d8f\x2da90d\x2db0b1b3a0b283-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 27 18:31:26.649160 systemd[1]: var-lib-kubelet-pods-87f3a823\x2dc410\x2d4d8f\x2da90d\x2db0b1b3a0b283-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dspnjc.mount: Deactivated successfully. May 27 18:31:26.652639 kubelet[1929]: I0527 18:31:26.651075 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "87f3a823-c410-4d8f-a90d-b0b1b3a0b283" (UID: "87f3a823-c410-4d8f-a90d-b0b1b3a0b283"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 18:31:26.657361 kubelet[1929]: I0527 18:31:26.657287 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-kube-api-access-spnjc" (OuterVolumeSpecName: "kube-api-access-spnjc") pod "87f3a823-c410-4d8f-a90d-b0b1b3a0b283" (UID: "87f3a823-c410-4d8f-a90d-b0b1b3a0b283"). InnerVolumeSpecName "kube-api-access-spnjc". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 18:31:26.657535 systemd-networkd[1453]: calie67464fc8aa: Link UP May 27 18:31:26.659862 systemd-networkd[1453]: calie67464fc8aa: Gained carrier May 27 18:31:26.682855 containerd[1535]: 2025-05-27 18:31:26.368 [INFO][3299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0 calico-apiserver-5c896bff9c- calico-apiserver 8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4 3073 0 2025-05-27 18:30:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c896bff9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 146.190.128.44 calico-apiserver-5c896bff9c-ffh9v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie67464fc8aa [] [] }} ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-ffh9v" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-" May 27 18:31:26.682855 containerd[1535]: 2025-05-27 18:31:26.368 [INFO][3299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-ffh9v" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" May 27 18:31:26.682855 containerd[1535]: 2025-05-27 18:31:26.436 [INFO][3317] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" HandleID="k8s-pod-network.68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Workload="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" May 27 18:31:26.684319 containerd[1535]: 2025-05-27 18:31:26.438 [INFO][3317] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" HandleID="k8s-pod-network.68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Workload="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac130), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"146.190.128.44", "pod":"calico-apiserver-5c896bff9c-ffh9v", "timestamp":"2025-05-27 18:31:26.436249301 +0000 UTC"}, Hostname:"146.190.128.44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:31:26.684319 containerd[1535]: 2025-05-27 18:31:26.438 [INFO][3317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:31:26.684319 containerd[1535]: 2025-05-27 18:31:26.446 [INFO][3317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:31:26.684319 containerd[1535]: 2025-05-27 18:31:26.446 [INFO][3317] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.128.44' May 27 18:31:26.684319 containerd[1535]: 2025-05-27 18:31:26.479 [INFO][3317] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" host="146.190.128.44" May 27 18:31:26.684319 containerd[1535]: 2025-05-27 18:31:26.545 [INFO][3317] ipam/ipam.go 394: Looking up existing affinities for host host="146.190.128.44" May 27 18:31:26.684319 containerd[1535]: 2025-05-27 18:31:26.589 [INFO][3317] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="146.190.128.44" May 27 18:31:26.684319 containerd[1535]: 2025-05-27 18:31:26.594 [INFO][3317] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:26.684319 containerd[1535]: 2025-05-27 18:31:26.598 [INFO][3317] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:26.684627 containerd[1535]: 2025-05-27 18:31:26.598 [INFO][3317] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" host="146.190.128.44" May 27 18:31:26.684627 containerd[1535]: 2025-05-27 18:31:26.601 [INFO][3317] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee May 27 18:31:26.684627 containerd[1535]: 2025-05-27 18:31:26.606 [INFO][3317] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" host="146.190.128.44" May 27 18:31:26.684627 containerd[1535]: 2025-05-27 18:31:26.617 [INFO][3317] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.5/26] block=192.168.90.0/26 handle="k8s-pod-network.68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" host="146.190.128.44" May 27 18:31:26.684627 containerd[1535]: 2025-05-27 18:31:26.617 [INFO][3317] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.5/26] handle="k8s-pod-network.68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" host="146.190.128.44" May 27 18:31:26.684627 containerd[1535]: 2025-05-27 18:31:26.617 [INFO][3317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:31:26.684627 containerd[1535]: 2025-05-27 18:31:26.617 [INFO][3317] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.5/26] IPv6=[] ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" HandleID="k8s-pod-network.68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Workload="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" May 27 18:31:26.684817 containerd[1535]: 2025-05-27 18:31:26.623 [INFO][3299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-ffh9v" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0", GenerateName:"calico-apiserver-5c896bff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4", ResourceVersion:"3073", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c896bff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"", Pod:"calico-apiserver-5c896bff9c-ffh9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie67464fc8aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:26.685050 containerd[1535]: 2025-05-27 18:31:26.623 [INFO][3299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.5/32] ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-ffh9v" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" May 27 18:31:26.685050 containerd[1535]: 2025-05-27 18:31:26.623 [INFO][3299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie67464fc8aa ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-ffh9v" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" May 27 18:31:26.685050 containerd[1535]: 2025-05-27 18:31:26.661 [INFO][3299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-ffh9v" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" May 27 18:31:26.685133 containerd[1535]: 2025-05-27 18:31:26.662 [INFO][3299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-ffh9v" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0", GenerateName:"calico-apiserver-5c896bff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4", ResourceVersion:"3073", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c896bff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee", Pod:"calico-apiserver-5c896bff9c-ffh9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie67464fc8aa", MAC:"1e:40:e4:7a:06:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:26.685194 containerd[1535]: 2025-05-27 18:31:26.677 [INFO][3299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" Namespace="calico-apiserver" Pod="calico-apiserver-5c896bff9c-ffh9v" WorkloadEndpoint="146.190.128.44-k8s-calico--apiserver--5c896bff9c--ffh9v-eth0" May 27 18:31:26.714590 containerd[1535]: time="2025-05-27T18:31:26.714499269Z" level=info msg="connecting to shim 68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee" address="unix:///run/containerd/s/2d6d97292a5986d79b5f18c54f66f2c7e2adfc108bb20867cef1bec2ad08f90c" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:26.720957 kubelet[1929]: I0527 18:31:26.720872 1929 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-spnjc\" (UniqueName: \"kubernetes.io/projected/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-kube-api-access-spnjc\") on node \"146.190.128.44\" DevicePath \"\"" May 27 18:31:26.720957 kubelet[1929]: I0527 18:31:26.720915 1929 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-whisker-backend-key-pair\") on node \"146.190.128.44\" DevicePath \"\"" May 27 18:31:26.720957 kubelet[1929]: I0527 18:31:26.720927 1929 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87f3a823-c410-4d8f-a90d-b0b1b3a0b283-whisker-ca-bundle\") on node \"146.190.128.44\" DevicePath \"\"" May 27 18:31:26.754393 systemd[1]: Started cri-containerd-68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee.scope - libcontainer container 
68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee. May 27 18:31:26.797685 kubelet[1929]: I0527 18:31:26.797265 1929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:31:26.881722 containerd[1535]: time="2025-05-27T18:31:26.881533529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c896bff9c-ffh9v,Uid:8ed10d2f-6ab3-40fd-8233-3e62f36b2ab4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee\"" May 27 18:31:26.951171 containerd[1535]: time="2025-05-27T18:31:26.951120437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37\" id:\"6ff00dfb6b5b1a589a9f5715f02430c1e275043fd2c351e2a3877947e6a3a07b\" pid:3388 exited_at:{seconds:1748370686 nanos:950287944}" May 27 18:31:27.068400 containerd[1535]: time="2025-05-27T18:31:27.068344700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37\" id:\"07f74549e352a5c57df260e2cfdb42c051a8df63f633a9f77c81f47c1bf79a60\" pid:3417 exited_at:{seconds:1748370687 nanos:67534250}" May 27 18:31:27.124674 kubelet[1929]: E0527 18:31:27.124600 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:27.245257 systemd-networkd[1453]: cali7beff0fb98a: Gained IPv6LL May 27 18:31:27.279111 systemd[1]: Removed slice kubepods-besteffort-pod87f3a823_c410_4d8f_a90d_b0b1b3a0b283.slice - libcontainer container kubepods-besteffort-pod87f3a823_c410_4d8f_a90d_b0b1b3a0b283.slice. May 27 18:31:28.076258 systemd-networkd[1453]: calie67464fc8aa: Gained IPv6LL May 27 18:31:28.125575 kubelet[1929]: E0527 18:31:28.125519 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:28.252090 containerd[1535]: time="2025-05-27T18:31:28.252040866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-fm8nw,Uid:39cdc85a-afd7-457e-b68c-909d1c1ac18e,Namespace:default,Attempt:0,}" May 27 18:31:28.454964 systemd-networkd[1453]: cali2d882d3d1b1: Link UP May 27 18:31:28.456431 systemd-networkd[1453]: cali2d882d3d1b1: Gained carrier May 27 18:31:28.472196 containerd[1535]: 2025-05-27 18:31:28.318 [INFO][3432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0 nginx-deployment-7fcdb87857- default 39cdc85a-afd7-457e-b68c-909d1c1ac18e 3102 0 2025-05-27 18:31:14 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 146.190.128.44 nginx-deployment-7fcdb87857-fm8nw eth0 default [] [] [kns.default ksa.default.default] cali2d882d3d1b1 [] [] }} ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Namespace="default" Pod="nginx-deployment-7fcdb87857-fm8nw" WorkloadEndpoint="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-" May 27 18:31:28.472196 containerd[1535]: 2025-05-27 18:31:28.319 [INFO][3432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Namespace="default" Pod="nginx-deployment-7fcdb87857-fm8nw" 
WorkloadEndpoint="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" May 27 18:31:28.472196 containerd[1535]: 2025-05-27 18:31:28.366 [INFO][3444] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" HandleID="k8s-pod-network.a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Workload="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.366 [INFO][3444] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" HandleID="k8s-pod-network.a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Workload="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002379b0), Attrs:map[string]string{"namespace":"default", "node":"146.190.128.44", "pod":"nginx-deployment-7fcdb87857-fm8nw", "timestamp":"2025-05-27 18:31:28.366254642 +0000 UTC"}, Hostname:"146.190.128.44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.369 [INFO][3444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.369 [INFO][3444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.369 [INFO][3444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.128.44' May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.381 [INFO][3444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" host="146.190.128.44" May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.390 [INFO][3444] ipam/ipam.go 394: Looking up existing affinities for host host="146.190.128.44" May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.398 [INFO][3444] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="146.190.128.44" May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.401 [INFO][3444] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.407 [INFO][3444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:28.473774 containerd[1535]: 2025-05-27 18:31:28.407 [INFO][3444] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" host="146.190.128.44" May 27 18:31:28.475692 containerd[1535]: 2025-05-27 18:31:28.410 [INFO][3444] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5 May 27 18:31:28.475692 containerd[1535]: 2025-05-27 18:31:28.417 [INFO][3444] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" host="146.190.128.44" May 27 18:31:28.475692 containerd[1535]: 2025-05-27 18:31:28.428 [INFO][3444] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.6/26] 
block=192.168.90.0/26 handle="k8s-pod-network.a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" host="146.190.128.44" May 27 18:31:28.475692 containerd[1535]: 2025-05-27 18:31:28.429 [INFO][3444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.6/26] handle="k8s-pod-network.a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" host="146.190.128.44" May 27 18:31:28.475692 containerd[1535]: 2025-05-27 18:31:28.429 [INFO][3444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:31:28.475692 containerd[1535]: 2025-05-27 18:31:28.429 [INFO][3444] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.6/26] IPv6=[] ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" HandleID="k8s-pod-network.a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Workload="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" May 27 18:31:28.475882 containerd[1535]: 2025-05-27 18:31:28.432 [INFO][3432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Namespace="default" Pod="nginx-deployment-7fcdb87857-fm8nw" WorkloadEndpoint="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"39cdc85a-afd7-457e-b68c-909d1c1ac18e", ResourceVersion:"3102", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-fm8nw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali2d882d3d1b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:28.475882 containerd[1535]: 2025-05-27 18:31:28.432 [INFO][3432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.6/32] ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Namespace="default" Pod="nginx-deployment-7fcdb87857-fm8nw" WorkloadEndpoint="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" May 27 18:31:28.476112 containerd[1535]: 2025-05-27 18:31:28.432 [INFO][3432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d882d3d1b1 ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Namespace="default" Pod="nginx-deployment-7fcdb87857-fm8nw" WorkloadEndpoint="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" May 27 18:31:28.476112 containerd[1535]: 2025-05-27 18:31:28.456 [INFO][3432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Namespace="default" Pod="nginx-deployment-7fcdb87857-fm8nw" WorkloadEndpoint="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" May 27 18:31:28.476166 containerd[1535]: 2025-05-27 18:31:28.457 [INFO][3432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Namespace="default" Pod="nginx-deployment-7fcdb87857-fm8nw" WorkloadEndpoint="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"39cdc85a-afd7-457e-b68c-909d1c1ac18e", ResourceVersion:"3102", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5", Pod:"nginx-deployment-7fcdb87857-fm8nw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali2d882d3d1b1", MAC:"d6:29:b0:0d:fd:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:28.476270 containerd[1535]: 2025-05-27 18:31:28.467 [INFO][3432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" Namespace="default" Pod="nginx-deployment-7fcdb87857-fm8nw" WorkloadEndpoint="146.190.128.44-k8s-nginx--deployment--7fcdb87857--fm8nw-eth0" May 27 18:31:28.521019 containerd[1535]: time="2025-05-27T18:31:28.520949057Z" level=info msg="connecting to shim a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5" address="unix:///run/containerd/s/1267ac73fe49ee3a828bb66cdf01d037ae548a34d902928f902b54256576a954" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:28.566393 systemd[1]: Started cri-containerd-a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5.scope - libcontainer container a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5. 
May 27 18:31:28.635715 containerd[1535]: time="2025-05-27T18:31:28.635652414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-fm8nw,Uid:39cdc85a-afd7-457e-b68c-909d1c1ac18e,Namespace:default,Attempt:0,} returns sandbox id \"a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5\"" May 27 18:31:29.126082 kubelet[1929]: E0527 18:31:29.126014 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:29.256178 kubelet[1929]: I0527 18:31:29.256136 1929 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f3a823-c410-4d8f-a90d-b0b1b3a0b283" path="/var/lib/kubelet/pods/87f3a823-c410-4d8f-a90d-b0b1b3a0b283/volumes" May 27 18:31:29.612195 systemd-networkd[1453]: cali2d882d3d1b1: Gained IPv6LL May 27 18:31:29.713043 containerd[1535]: time="2025-05-27T18:31:29.712237202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:29.714006 containerd[1535]: time="2025-05-27T18:31:29.713943057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 18:31:29.716418 containerd[1535]: time="2025-05-27T18:31:29.716373726Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:29.719417 containerd[1535]: time="2025-05-27T18:31:29.719372343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:29.720229 containerd[1535]: time="2025-05-27T18:31:29.720190021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.578866408s" May 27 18:31:29.720503 containerd[1535]: time="2025-05-27T18:31:29.720381135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 18:31:29.722851 containerd[1535]: time="2025-05-27T18:31:29.722500800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 18:31:29.725608 containerd[1535]: time="2025-05-27T18:31:29.725172189Z" level=info msg="CreateContainer within sandbox \"a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 18:31:29.740234 containerd[1535]: time="2025-05-27T18:31:29.739270443Z" level=info msg="Container 4515d44086c7f846c32e7d33ab8b90af73cccfc460797bc5aaa3ced3ab46dfcc: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:29.750196 containerd[1535]: time="2025-05-27T18:31:29.750140977Z" level=info msg="CreateContainer within sandbox \"a1b29321975fe1b41df7de6aa96e93039039801e31ba79e0454b963e07fc4ea8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4515d44086c7f846c32e7d33ab8b90af73cccfc460797bc5aaa3ced3ab46dfcc\"" May 27 18:31:29.751373 containerd[1535]: 
time="2025-05-27T18:31:29.751332775Z" level=info msg="StartContainer for \"4515d44086c7f846c32e7d33ab8b90af73cccfc460797bc5aaa3ced3ab46dfcc\"" May 27 18:31:29.753178 containerd[1535]: time="2025-05-27T18:31:29.753134075Z" level=info msg="connecting to shim 4515d44086c7f846c32e7d33ab8b90af73cccfc460797bc5aaa3ced3ab46dfcc" address="unix:///run/containerd/s/96e432b4b82854f4291b09f893f6c0f4f7dd478ceb60e1ee9c8b5484d6dd6a2d" protocol=ttrpc version=3 May 27 18:31:29.792312 systemd[1]: Started cri-containerd-4515d44086c7f846c32e7d33ab8b90af73cccfc460797bc5aaa3ced3ab46dfcc.scope - libcontainer container 4515d44086c7f846c32e7d33ab8b90af73cccfc460797bc5aaa3ced3ab46dfcc. May 27 18:31:29.870969 containerd[1535]: time="2025-05-27T18:31:29.869238958Z" level=info msg="StartContainer for \"4515d44086c7f846c32e7d33ab8b90af73cccfc460797bc5aaa3ced3ab46dfcc\" returns successfully" May 27 18:31:30.126735 kubelet[1929]: E0527 18:31:30.126551 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:31.127331 kubelet[1929]: E0527 18:31:31.127240 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:31.482766 kubelet[1929]: I0527 18:31:31.482563 1929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:31:31.956050 containerd[1535]: time="2025-05-27T18:31:31.955955292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:31.957226 containerd[1535]: time="2025-05-27T18:31:31.956915350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 18:31:31.957866 containerd[1535]: time="2025-05-27T18:31:31.957783250Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:31.961466 containerd[1535]: time="2025-05-27T18:31:31.960440819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:31.962186 containerd[1535]: time="2025-05-27T18:31:31.962114671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.238787112s" May 27 18:31:31.962442 containerd[1535]: time="2025-05-27T18:31:31.962394044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 18:31:31.964254 containerd[1535]: time="2025-05-27T18:31:31.964180621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 18:31:31.980873 containerd[1535]: time="2025-05-27T18:31:31.980656676Z" level=info msg="CreateContainer within sandbox \"b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 
18:31:31.991804 containerd[1535]: time="2025-05-27T18:31:31.991750996Z" level=info msg="Container 5f708fe2794e754fbd470354bed2a4cc44b5dfe38282710cadf643c0a5a959ed: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:32.011428 containerd[1535]: time="2025-05-27T18:31:32.011351397Z" level=info msg="CreateContainer within sandbox \"b09755df48a5d09ea6f4fee350379892363ad014ca5f70d9a00f71b45d4b1c1e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5f708fe2794e754fbd470354bed2a4cc44b5dfe38282710cadf643c0a5a959ed\"" May 27 18:31:32.012263 containerd[1535]: time="2025-05-27T18:31:32.012213352Z" level=info msg="StartContainer for \"5f708fe2794e754fbd470354bed2a4cc44b5dfe38282710cadf643c0a5a959ed\"" May 27 18:31:32.016117 containerd[1535]: time="2025-05-27T18:31:32.016051399Z" level=info msg="connecting to shim 5f708fe2794e754fbd470354bed2a4cc44b5dfe38282710cadf643c0a5a959ed" address="unix:///run/containerd/s/584b7c188d56bcea2ad609c68f3e3b8630fa5592f9eb6c49b94e43771254594a" protocol=ttrpc version=3 May 27 18:31:32.061611 systemd[1]: Started cri-containerd-5f708fe2794e754fbd470354bed2a4cc44b5dfe38282710cadf643c0a5a959ed.scope - libcontainer container 5f708fe2794e754fbd470354bed2a4cc44b5dfe38282710cadf643c0a5a959ed. May 27 18:31:32.124591 containerd[1535]: time="2025-05-27T18:31:32.124542195Z" level=info msg="StartContainer for \"5f708fe2794e754fbd470354bed2a4cc44b5dfe38282710cadf643c0a5a959ed\" returns successfully" May 27 18:31:32.127690 kubelet[1929]: E0527 18:31:32.127630 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:32.273237 kubelet[1929]: I0527 18:31:32.273169 1929 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 18:31:32.275072 kubelet[1929]: I0527 18:31:32.274713 1929 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 18:31:32.356388 containerd[1535]: time="2025-05-27T18:31:32.356312884Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:32.359026 containerd[1535]: time="2025-05-27T18:31:32.358057272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 18:31:32.360077 containerd[1535]: time="2025-05-27T18:31:32.360025684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 395.774774ms" May 27 18:31:32.360077 containerd[1535]: time="2025-05-27T18:31:32.360073785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 18:31:32.361278 containerd[1535]: time="2025-05-27T18:31:32.361251597Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 27 18:31:32.365836 containerd[1535]: time="2025-05-27T18:31:32.365676206Z" level=info msg="CreateContainer within sandbox 
\"68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 18:31:32.381031 containerd[1535]: time="2025-05-27T18:31:32.379605513Z" level=info msg="Container 0ee2b0bab9c945ef53e72cd385200d1b1c4f2dd9b66f817498d8e2421de28584: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:32.391366 containerd[1535]: time="2025-05-27T18:31:32.391320700Z" level=info msg="CreateContainer within sandbox \"68fe40a375210d7faac3afb80bdab326245d928524f01e1aa94a43e81a9630ee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0ee2b0bab9c945ef53e72cd385200d1b1c4f2dd9b66f817498d8e2421de28584\"" May 27 18:31:32.392351 containerd[1535]: time="2025-05-27T18:31:32.392293154Z" level=info msg="StartContainer for \"0ee2b0bab9c945ef53e72cd385200d1b1c4f2dd9b66f817498d8e2421de28584\"" May 27 18:31:32.394103 containerd[1535]: time="2025-05-27T18:31:32.394041215Z" level=info msg="connecting to shim 0ee2b0bab9c945ef53e72cd385200d1b1c4f2dd9b66f817498d8e2421de28584" address="unix:///run/containerd/s/2d6d97292a5986d79b5f18c54f66f2c7e2adfc108bb20867cef1bec2ad08f90c" protocol=ttrpc version=3 May 27 18:31:32.433346 systemd[1]: Started cri-containerd-0ee2b0bab9c945ef53e72cd385200d1b1c4f2dd9b66f817498d8e2421de28584.scope - libcontainer container 0ee2b0bab9c945ef53e72cd385200d1b1c4f2dd9b66f817498d8e2421de28584. May 27 18:31:32.524238 kubelet[1929]: I0527 18:31:32.523489 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-njssr" podStartSLOduration=27.227381726 podStartE2EDuration="35.522870268s" podCreationTimestamp="2025-05-27 18:30:57 +0000 UTC" firstStartedPulling="2025-05-27 18:31:23.668187604 +0000 UTC m=+27.387390608" lastFinishedPulling="2025-05-27 18:31:31.963676147 +0000 UTC m=+35.682879150" observedRunningTime="2025-05-27 18:31:32.522654871 +0000 UTC m=+36.241857885" watchObservedRunningTime="2025-05-27 18:31:32.522870268 +0000 UTC m=+36.242073279" May 27 18:31:32.524238 kubelet[1929]: I0527 18:31:32.523720 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c896bff9c-f2xl7" podStartSLOduration=64.77457549 podStartE2EDuration="1m8.523707236s" podCreationTimestamp="2025-05-27 18:30:24 +0000 UTC" firstStartedPulling="2025-05-27 18:31:25.972547939 +0000 UTC m=+29.691750942" lastFinishedPulling="2025-05-27 18:31:29.721679697 +0000 UTC m=+33.440882688" observedRunningTime="2025-05-27 18:31:30.493260105 +0000 UTC m=+34.212463118" watchObservedRunningTime="2025-05-27 18:31:32.523707236 +0000 UTC m=+36.242910272" May 27 18:31:32.527905 containerd[1535]: time="2025-05-27T18:31:32.527836073Z" level=info msg="StartContainer for \"0ee2b0bab9c945ef53e72cd385200d1b1c4f2dd9b66f817498d8e2421de28584\" returns successfully" May 27 18:31:33.128566 kubelet[1929]: E0527 18:31:33.128477 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:33.524964 kubelet[1929]: I0527 18:31:33.524852 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c896bff9c-ffh9v" podStartSLOduration=63.050197058 podStartE2EDuration="1m8.524825733s" podCreationTimestamp="2025-05-27 18:30:25 +0000 UTC" firstStartedPulling="2025-05-27 18:31:26.886510512 +0000 UTC m=+30.605713523" lastFinishedPulling="2025-05-27 18:31:32.361139208 +0000 UTC m=+36.080342198" observedRunningTime="2025-05-27 18:31:33.523099316 
+0000 UTC m=+37.242302331" watchObservedRunningTime="2025-05-27 18:31:33.524825733 +0000 UTC m=+37.244028744" May 27 18:31:34.129178 kubelet[1929]: E0527 18:31:34.129101 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:35.130209 kubelet[1929]: E0527 18:31:35.130137 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:35.709220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017446411.mount: Deactivated successfully. May 27 18:31:36.130440 kubelet[1929]: E0527 18:31:36.130381 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:37.087227 kubelet[1929]: E0527 18:31:37.087174 1929 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:37.131578 kubelet[1929]: E0527 18:31:37.131519 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:37.220841 containerd[1535]: time="2025-05-27T18:31:37.220713175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:37.222341 containerd[1535]: time="2025-05-27T18:31:37.222232698Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73308117" May 27 18:31:37.223663 containerd[1535]: time="2025-05-27T18:31:37.223275165Z" level=info msg="ImageCreate event name:\"sha256:93ad19b5b847f64ffb1df64c55e6da69a9ea1c9c00af759cc5d1851adf649cad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:37.242381 containerd[1535]: time="2025-05-27T18:31:37.242318725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d05f253bbd7e7775260835f038c9a389140350699c88c7f0fbbb44a44db71668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:37.243453 containerd[1535]: time="2025-05-27T18:31:37.243393874Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:93ad19b5b847f64ffb1df64c55e6da69a9ea1c9c00af759cc5d1851adf649cad\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d05f253bbd7e7775260835f038c9a389140350699c88c7f0fbbb44a44db71668\", size \"73307995\" in 4.881984533s" May 27 18:31:37.243453 containerd[1535]: time="2025-05-27T18:31:37.243447913Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:93ad19b5b847f64ffb1df64c55e6da69a9ea1c9c00af759cc5d1851adf649cad\"" May 27 18:31:37.256395 containerd[1535]: time="2025-05-27T18:31:37.255185110Z" level=info msg="CreateContainer within sandbox \"a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 27 18:31:37.283742 containerd[1535]: time="2025-05-27T18:31:37.283672728Z" level=info msg="Container fb6e5b6480b979e12ba24105ce80cff9cb4280981d0064642a8de166f101e014: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:37.285455 containerd[1535]: time="2025-05-27T18:31:37.284377590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:31:37.291184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67132306.mount: Deactivated successfully. 
May 27 18:31:37.318515 containerd[1535]: time="2025-05-27T18:31:37.318351937Z" level=info msg="CreateContainer within sandbox \"a9da98613fbb2ee14af69974488ad4cbce94baaa9f6696bdec67925c2208fea5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fb6e5b6480b979e12ba24105ce80cff9cb4280981d0064642a8de166f101e014\"" May 27 18:31:37.319937 containerd[1535]: time="2025-05-27T18:31:37.319859101Z" level=info msg="StartContainer for \"fb6e5b6480b979e12ba24105ce80cff9cb4280981d0064642a8de166f101e014\"" May 27 18:31:37.322838 containerd[1535]: time="2025-05-27T18:31:37.322370575Z" level=info msg="connecting to shim fb6e5b6480b979e12ba24105ce80cff9cb4280981d0064642a8de166f101e014" address="unix:///run/containerd/s/1267ac73fe49ee3a828bb66cdf01d037ae548a34d902928f902b54256576a954" protocol=ttrpc version=3 May 27 18:31:37.367383 systemd[1]: Started cri-containerd-fb6e5b6480b979e12ba24105ce80cff9cb4280981d0064642a8de166f101e014.scope - libcontainer container fb6e5b6480b979e12ba24105ce80cff9cb4280981d0064642a8de166f101e014. May 27 18:31:37.422439 containerd[1535]: time="2025-05-27T18:31:37.422340522Z" level=info msg="StartContainer for \"fb6e5b6480b979e12ba24105ce80cff9cb4280981d0064642a8de166f101e014\" returns successfully" May 27 18:31:37.535865 kubelet[1929]: I0527 18:31:37.535552 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-fm8nw" podStartSLOduration=14.924284906 podStartE2EDuration="23.535529047s" podCreationTimestamp="2025-05-27 18:31:14 +0000 UTC" firstStartedPulling="2025-05-27 18:31:28.638184025 +0000 UTC m=+32.357387046" lastFinishedPulling="2025-05-27 18:31:37.249428189 +0000 UTC m=+40.968631187" observedRunningTime="2025-05-27 18:31:37.535513858 +0000 UTC m=+41.254716870" watchObservedRunningTime="2025-05-27 18:31:37.535529047 +0000 UTC m=+41.254732059" May 27 18:31:37.557365 containerd[1535]: time="2025-05-27T18:31:37.557291772Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:31:37.558668 containerd[1535]: time="2025-05-27T18:31:37.558521670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:31:37.558668 containerd[1535]: time="2025-05-27T18:31:37.558595564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:31:37.559624 kubelet[1929]: E0527 18:31:37.559316 1929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:31:37.559624 kubelet[1929]: E0527 18:31:37.559458 1929 
kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:31:37.561193 kubelet[1929]: E0527 18:31:37.561028 1929 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2wlwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-l2hf9_calico-system(923329c2-959f-4193-b2be-f3dbcc05c0db): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:31:37.562593 kubelet[1929]: E0527 18:31:37.562370 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l2hf9" podUID="923329c2-959f-4193-b2be-f3dbcc05c0db" May 27 18:31:38.132349 kubelet[1929]: E0527 18:31:38.132263 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:39.133443 kubelet[1929]: E0527 18:31:39.133373 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:40.134339 kubelet[1929]: E0527 18:31:40.134274 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:41.135290 kubelet[1929]: E0527 18:31:41.135221 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:42.136373 kubelet[1929]: E0527 18:31:42.136269 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:43.137278 kubelet[1929]: E0527 18:31:43.137199 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:43.283015 systemd[1]: Created slice kubepods-besteffort-pod5f10c39b_5881_4eb1_bc96_05501483b0f7.slice - libcontainer container kubepods-besteffort-pod5f10c39b_5881_4eb1_bc96_05501483b0f7.slice. 
May 27 18:31:43.343594 kubelet[1929]: I0527 18:31:43.343446 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5f10c39b-5881-4eb1-bc96-05501483b0f7-data\") pod \"nfs-server-provisioner-0\" (UID: \"5f10c39b-5881-4eb1-bc96-05501483b0f7\") " pod="default/nfs-server-provisioner-0" May 27 18:31:43.343594 kubelet[1929]: I0527 18:31:43.343526 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pstbb\" (UniqueName: \"kubernetes.io/projected/5f10c39b-5881-4eb1-bc96-05501483b0f7-kube-api-access-pstbb\") pod \"nfs-server-provisioner-0\" (UID: \"5f10c39b-5881-4eb1-bc96-05501483b0f7\") " pod="default/nfs-server-provisioner-0" May 27 18:31:43.589051 containerd[1535]: time="2025-05-27T18:31:43.588912653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5f10c39b-5881-4eb1-bc96-05501483b0f7,Namespace:default,Attempt:0,}" May 27 18:31:43.782135 systemd-networkd[1453]: cali60e51b789ff: Link UP May 27 18:31:43.782441 systemd-networkd[1453]: cali60e51b789ff: Gained carrier May 27 18:31:43.804278 containerd[1535]: 2025-05-27 18:31:43.650 [INFO][3730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.128.44-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 5f10c39b-5881-4eb1-bc96-05501483b0f7 3339 0 2025-05-27 18:31:43 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 146.190.128.44 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.128.44-k8s-nfs--server--provisioner--0-" May 27 18:31:43.804278 containerd[1535]: 2025-05-27 18:31:43.650 [INFO][3730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" May 27 18:31:43.804278 containerd[1535]: 2025-05-27 18:31:43.693 [INFO][3742] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" HandleID="k8s-pod-network.260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Workload="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.694 [INFO][3742] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" HandleID="k8s-pod-network.260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" 
Workload="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9020), Attrs:map[string]string{"namespace":"default", "node":"146.190.128.44", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-27 18:31:43.693835973 +0000 UTC"}, Hostname:"146.190.128.44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.694 [INFO][3742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.694 [INFO][3742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.694 [INFO][3742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.128.44' May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.714 [INFO][3742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" host="146.190.128.44" May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.726 [INFO][3742] ipam/ipam.go 394: Looking up existing affinities for host host="146.190.128.44" May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.736 [INFO][3742] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="146.190.128.44" May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.739 [INFO][3742] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.744 [INFO][3742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:43.804927 containerd[1535]: 2025-05-27 18:31:43.745 [INFO][3742] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" host="146.190.128.44" May 27 18:31:43.805389 containerd[1535]: 2025-05-27 18:31:43.748 [INFO][3742] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa May 27 18:31:43.805389 containerd[1535]: 2025-05-27 18:31:43.759 [INFO][3742] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" host="146.190.128.44" May 27 18:31:43.805389 containerd[1535]: 2025-05-27 18:31:43.772 [INFO][3742] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.7/26] block=192.168.90.0/26 handle="k8s-pod-network.260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" host="146.190.128.44" May 27 18:31:43.805389 containerd[1535]: 2025-05-27 18:31:43.772 [INFO][3742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.7/26] handle="k8s-pod-network.260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" host="146.190.128.44" May 27 18:31:43.805389 containerd[1535]: 2025-05-27 18:31:43.772 [INFO][3742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:31:43.805389 containerd[1535]: 2025-05-27 18:31:43.772 [INFO][3742] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.7/26] IPv6=[] ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" HandleID="k8s-pod-network.260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Workload="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" May 27 18:31:43.805611 containerd[1535]: 2025-05-27 18:31:43.775 [INFO][3730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5f10c39b-5881-4eb1-bc96-05501483b0f7", ResourceVersion:"3339", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 31, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.90.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:43.805611 containerd[1535]: 2025-05-27 18:31:43.775 [INFO][3730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.7/32] ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" May 27 18:31:43.805611 containerd[1535]: 2025-05-27 18:31:43.776 [INFO][3730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" May 27 18:31:43.805611 containerd[1535]: 2025-05-27 18:31:43.782 [INFO][3730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" May 27 18:31:43.805866 containerd[1535]: 2025-05-27 18:31:43.784 [INFO][3730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5f10c39b-5881-4eb1-bc96-05501483b0f7", ResourceVersion:"3339", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 31, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.90.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"9e:ed:15:8e:f1:84", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:43.805866 containerd[1535]: 2025-05-27 18:31:43.798 [INFO][3730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.128.44-k8s-nfs--server--provisioner--0-eth0" May 27 18:31:43.867798 containerd[1535]: time="2025-05-27T18:31:43.867628404Z" level=info msg="connecting to shim 260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa" address="unix:///run/containerd/s/f1df5bb31097dbf73f765613818afc898ec0493316facd6d9033406fca7154cc" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:43.912342 systemd[1]: Started cri-containerd-260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa.scope - libcontainer container 260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa. 
May 27 18:31:43.976593 containerd[1535]: time="2025-05-27T18:31:43.976388058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5f10c39b-5881-4eb1-bc96-05501483b0f7,Namespace:default,Attempt:0,} returns sandbox id \"260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa\"" May 27 18:31:43.979629 containerd[1535]: time="2025-05-27T18:31:43.979588721Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 27 18:31:44.138424 kubelet[1929]: E0527 18:31:44.138008 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:45.140519 kubelet[1929]: E0527 18:31:45.139312 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:45.676513 systemd-networkd[1453]: cali60e51b789ff: Gained IPv6LL May 27 18:31:46.140141 kubelet[1929]: E0527 18:31:46.140037 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:47.141107 kubelet[1929]: E0527 18:31:47.141041 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:47.686355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239721751.mount: Deactivated successfully. May 27 18:31:48.141651 kubelet[1929]: E0527 18:31:48.141509 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:49.142257 kubelet[1929]: E0527 18:31:49.142177 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:50.047193 containerd[1535]: time="2025-05-27T18:31:50.047126033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:50.048033 containerd[1535]: time="2025-05-27T18:31:50.047968273Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" May 27 18:31:50.050910 containerd[1535]: time="2025-05-27T18:31:50.050858185Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:50.054019 containerd[1535]: time="2025-05-27T18:31:50.052535634Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.072709261s" May 27 18:31:50.054019 containerd[1535]: time="2025-05-27T18:31:50.052583910Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 27 18:31:50.054019 containerd[1535]: time="2025-05-27T18:31:50.053414059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:50.059726 containerd[1535]: 
time="2025-05-27T18:31:50.059668644Z" level=info msg="CreateContainer within sandbox \"260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 27 18:31:50.078251 containerd[1535]: time="2025-05-27T18:31:50.077939498Z" level=info msg="Container 34d256af8c01c9c7188f0e94be101325b23e8c62226aaa22b22644f882f55e56: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:50.085276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount102327624.mount: Deactivated successfully. May 27 18:31:50.096542 containerd[1535]: time="2025-05-27T18:31:50.096342074Z" level=info msg="CreateContainer within sandbox \"260e2de1b953ef068ecfbf63c6efb4b03466793881f24a556242ef5c0ceaa1fa\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"34d256af8c01c9c7188f0e94be101325b23e8c62226aaa22b22644f882f55e56\"" May 27 18:31:50.097383 containerd[1535]: time="2025-05-27T18:31:50.097314365Z" level=info msg="StartContainer for \"34d256af8c01c9c7188f0e94be101325b23e8c62226aaa22b22644f882f55e56\"" May 27 18:31:50.099169 containerd[1535]: time="2025-05-27T18:31:50.099072172Z" level=info msg="connecting to shim 34d256af8c01c9c7188f0e94be101325b23e8c62226aaa22b22644f882f55e56" address="unix:///run/containerd/s/f1df5bb31097dbf73f765613818afc898ec0493316facd6d9033406fca7154cc" protocol=ttrpc version=3 May 27 18:31:50.128273 systemd[1]: Started cri-containerd-34d256af8c01c9c7188f0e94be101325b23e8c62226aaa22b22644f882f55e56.scope - libcontainer container 34d256af8c01c9c7188f0e94be101325b23e8c62226aaa22b22644f882f55e56. May 27 18:31:50.143302 kubelet[1929]: E0527 18:31:50.143245 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:50.169679 containerd[1535]: time="2025-05-27T18:31:50.169639815Z" level=info msg="StartContainer for \"34d256af8c01c9c7188f0e94be101325b23e8c62226aaa22b22644f882f55e56\" returns successfully" May 27 18:31:50.607555 kubelet[1929]: I0527 18:31:50.607442 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.530345193 podStartE2EDuration="7.607067079s" podCreationTimestamp="2025-05-27 18:31:43 +0000 UTC" firstStartedPulling="2025-05-27 18:31:43.978674556 +0000 UTC m=+47.697877547" lastFinishedPulling="2025-05-27 18:31:50.055396429 +0000 UTC m=+53.774599433" observedRunningTime="2025-05-27 18:31:50.604763709 +0000 UTC m=+54.323966715" watchObservedRunningTime="2025-05-27 18:31:50.607067079 +0000 UTC m=+54.326270094" May 27 18:31:51.144709 kubelet[1929]: E0527 18:31:51.144633 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:52.145332 kubelet[1929]: E0527 18:31:52.145265 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:52.254065 kubelet[1929]: E0527 18:31:52.253921 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l2hf9" podUID="923329c2-959f-4193-b2be-f3dbcc05c0db" May 27 18:31:53.147175 kubelet[1929]: E0527 18:31:53.147110 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:54.147636 kubelet[1929]: E0527 18:31:54.147561 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:55.148650 kubelet[1929]: E0527 18:31:55.148552 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:55.464962 systemd[1]: Created slice kubepods-besteffort-pod4215e3be_df10_4fa9_b281_ad2b84a11cac.slice - libcontainer container kubepods-besteffort-pod4215e3be_df10_4fa9_b281_ad2b84a11cac.slice. May 27 18:31:55.537594 kubelet[1929]: I0527 18:31:55.537179 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mktz4\" (UniqueName: \"kubernetes.io/projected/4215e3be-df10-4fa9-b281-ad2b84a11cac-kube-api-access-mktz4\") pod \"test-pod-1\" (UID: \"4215e3be-df10-4fa9-b281-ad2b84a11cac\") " pod="default/test-pod-1" May 27 18:31:55.537594 kubelet[1929]: I0527 18:31:55.537247 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4adcf809-b346-492d-afc7-90e0860ec434\" (UniqueName: \"kubernetes.io/nfs/4215e3be-df10-4fa9-b281-ad2b84a11cac-pvc-4adcf809-b346-492d-afc7-90e0860ec434\") pod \"test-pod-1\" (UID: \"4215e3be-df10-4fa9-b281-ad2b84a11cac\") " pod="default/test-pod-1" May 27 18:31:55.696188 kernel: netfs: FS-Cache loaded May 27 18:31:55.794142 kernel: RPC: Registered named UNIX socket transport module. May 27 18:31:55.794296 kernel: RPC: Registered udp transport module. May 27 18:31:55.794317 kernel: RPC: Registered tcp transport module. May 27 18:31:55.794335 kernel: RPC: Registered tcp-with-tls transport module. May 27 18:31:55.795040 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 27 18:31:56.096022 kernel: NFS: Registering the id_resolver key type May 27 18:31:56.096127 kernel: Key type id_resolver registered May 27 18:31:56.096148 kernel: Key type id_legacy registered May 27 18:31:56.144884 nfsidmap[3924]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '0.0-1-cb46b2958a' May 27 18:31:56.149763 kubelet[1929]: E0527 18:31:56.149689 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:56.152012 nfsidmap[3925]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '0.0-1-cb46b2958a' May 27 18:31:56.174275 nfsrahead[3928]: setting /var/lib/kubelet/pods/4215e3be-df10-4fa9-b281-ad2b84a11cac/volumes/kubernetes.io~nfs/pvc-4adcf809-b346-492d-afc7-90e0860ec434 readahead to 128 May 27 18:31:56.370360 containerd[1535]: time="2025-05-27T18:31:56.370298491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4215e3be-df10-4fa9-b281-ad2b84a11cac,Namespace:default,Attempt:0,}" May 27 18:31:56.529697 systemd-networkd[1453]: cali5ec59c6bf6e: Link UP May 27 18:31:56.530768 systemd-networkd[1453]: cali5ec59c6bf6e: Gained carrier May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.428 [INFO][3930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.128.44-k8s-test--pod--1-eth0 default 4215e3be-df10-4fa9-b281-ad2b84a11cac 3415 0 2025-05-27 18:31:43 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 146.190.128.44 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.128.44-k8s-test--pod--1-" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.428 [INFO][3930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.128.44-k8s-test--pod--1-eth0" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.466 [INFO][3942] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" HandleID="k8s-pod-network.3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Workload="146.190.128.44-k8s-test--pod--1-eth0" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.466 [INFO][3942] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" HandleID="k8s-pod-network.3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Workload="146.190.128.44-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233670), Attrs:map[string]string{"namespace":"default", "node":"146.190.128.44", "pod":"test-pod-1", "timestamp":"2025-05-27 18:31:56.466414575 +0000 UTC"}, Hostname:"146.190.128.44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.466 [INFO][3942] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.467 [INFO][3942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.467 [INFO][3942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.128.44' May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.477 [INFO][3942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" host="146.190.128.44" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.487 [INFO][3942] ipam/ipam.go 394: Looking up existing affinities for host host="146.190.128.44" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.497 [INFO][3942] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="146.190.128.44" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.500 [INFO][3942] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.504 [INFO][3942] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="146.190.128.44" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.505 [INFO][3942] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" host="146.190.128.44" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.507 [INFO][3942] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999 May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.513 [INFO][3942] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" host="146.190.128.44" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.522 [INFO][3942] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.8/26] block=192.168.90.0/26 handle="k8s-pod-network.3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" host="146.190.128.44" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.522 [INFO][3942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.8/26] handle="k8s-pod-network.3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" host="146.190.128.44" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.522 [INFO][3942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.522 [INFO][3942] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.8/26] IPv6=[] ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" HandleID="k8s-pod-network.3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Workload="146.190.128.44-k8s-test--pod--1-eth0" May 27 18:31:56.549021 containerd[1535]: 2025-05-27 18:31:56.524 [INFO][3930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.128.44-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4215e3be-df10-4fa9-b281-ad2b84a11cac", ResourceVersion:"3415", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 31, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:56.549783 containerd[1535]: 2025-05-27 18:31:56.525 [INFO][3930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.8/32] ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.128.44-k8s-test--pod--1-eth0" May 27 18:31:56.549783 containerd[1535]: 2025-05-27 18:31:56.525 [INFO][3930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.128.44-k8s-test--pod--1-eth0" May 27 18:31:56.549783 containerd[1535]: 2025-05-27 18:31:56.531 [INFO][3930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.128.44-k8s-test--pod--1-eth0" May 27 18:31:56.549783 containerd[1535]: 2025-05-27 18:31:56.531 [INFO][3930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.128.44-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.128.44-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4215e3be-df10-4fa9-b281-ad2b84a11cac", ResourceVersion:"3415", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 27, 18, 31, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.128.44", ContainerID:"3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"56:60:50:c0:5d:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:31:56.549783 containerd[1535]: 2025-05-27 18:31:56.543 [INFO][3930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.128.44-k8s-test--pod--1-eth0" May 27 18:31:56.591016 containerd[1535]: time="2025-05-27T18:31:56.590915414Z" level=info msg="connecting to shim 3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999" address="unix:///run/containerd/s/a8c47ec6823f6e53e6f0e94d0e97c3d4701e70e15e116aec2359f3ba9f1861f0" namespace=k8s.io protocol=ttrpc version=3 May 27 18:31:56.632583 systemd[1]: Started cri-containerd-3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999.scope - libcontainer container 3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999. 
May 27 18:31:56.718423 containerd[1535]: time="2025-05-27T18:31:56.718337946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4215e3be-df10-4fa9-b281-ad2b84a11cac,Namespace:default,Attempt:0,} returns sandbox id \"3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999\"" May 27 18:31:56.721538 containerd[1535]: time="2025-05-27T18:31:56.721198588Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 27 18:31:56.846762 kubelet[1929]: I0527 18:31:56.846388 1929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:31:57.086316 kubelet[1929]: E0527 18:31:57.086254 1929 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:57.110724 containerd[1535]: time="2025-05-27T18:31:57.110510091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2094ae70c6e5addfa265abc976463d361910ea0e81967e8ba847365e97effc37\" id:\"e6259501e8ba51f0c7d8ae90f89b8806463c6a3cb0db5c993b52deb4c837cf0b\" pid:4018 exited_at:{seconds:1748370717 nanos:109524680}" May 27 18:31:57.134155 containerd[1535]: time="2025-05-27T18:31:57.134097998Z" level=info msg="StopPodSandbox for \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\"" May 27 18:31:57.145020 containerd[1535]: time="2025-05-27T18:31:57.144303446Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 27 18:31:57.145217 containerd[1535]: time="2025-05-27T18:31:57.145120698Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:31:57.147862 containerd[1535]: time="2025-05-27T18:31:57.147803792Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:93ad19b5b847f64ffb1df64c55e6da69a9ea1c9c00af759cc5d1851adf649cad\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d05f253bbd7e7775260835f038c9a389140350699c88c7f0fbbb44a44db71668\", size \"73307995\" in 426.558471ms" May 27 18:31:57.148210 containerd[1535]: time="2025-05-27T18:31:57.147875866Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:93ad19b5b847f64ffb1df64c55e6da69a9ea1c9c00af759cc5d1851adf649cad\"" May 27 18:31:57.153057 kubelet[1929]: E0527 18:31:57.150560 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:57.157315 containerd[1535]: time="2025-05-27T18:31:57.157261600Z" level=info msg="CreateContainer within sandbox \"3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 27 18:31:57.177470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046405798.mount: Deactivated successfully. 
May 27 18:31:57.183137 containerd[1535]: time="2025-05-27T18:31:57.180148764Z" level=info msg="Container fc341aea226741d27b8d068a41a826b45ce8ece492f827b0a903b798c389b112: CDI devices from CRI Config.CDIDevices: []" May 27 18:31:57.191963 containerd[1535]: time="2025-05-27T18:31:57.191819332Z" level=info msg="CreateContainer within sandbox \"3b7dca9fd025a97aa023e082c004770b3b113fee71aef825f695ccd58b5c7999\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"fc341aea226741d27b8d068a41a826b45ce8ece492f827b0a903b798c389b112\"" May 27 18:31:57.194043 containerd[1535]: time="2025-05-27T18:31:57.193033311Z" level=info msg="StartContainer for \"fc341aea226741d27b8d068a41a826b45ce8ece492f827b0a903b798c389b112\"" May 27 18:31:57.194604 containerd[1535]: time="2025-05-27T18:31:57.194559962Z" level=info msg="connecting to shim fc341aea226741d27b8d068a41a826b45ce8ece492f827b0a903b798c389b112" address="unix:///run/containerd/s/a8c47ec6823f6e53e6f0e94d0e97c3d4701e70e15e116aec2359f3ba9f1861f0" protocol=ttrpc version=3 May 27 18:31:57.233284 systemd[1]: Started cri-containerd-fc341aea226741d27b8d068a41a826b45ce8ece492f827b0a903b798c389b112.scope - libcontainer container fc341aea226741d27b8d068a41a826b45ce8ece492f827b0a903b798c389b112. May 27 18:31:57.312955 containerd[1535]: time="2025-05-27T18:31:57.312877878Z" level=info msg="StartContainer for \"fc341aea226741d27b8d068a41a826b45ce8ece492f827b0a903b798c389b112\" returns successfully" May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.223 [WARNING][4038] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.223 [INFO][4038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.223 [INFO][4038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" iface="eth0" netns="" May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.223 [INFO][4038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.223 [INFO][4038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.299 [INFO][4057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.299 [INFO][4057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.299 [INFO][4057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.320 [WARNING][4057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.321 [INFO][4057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.323 [INFO][4057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:31:57.328917 containerd[1535]: 2025-05-27 18:31:57.326 [INFO][4038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:57.330319 containerd[1535]: time="2025-05-27T18:31:57.329487984Z" level=info msg="TearDown network for sandbox \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\" successfully" May 27 18:31:57.330319 containerd[1535]: time="2025-05-27T18:31:57.329546453Z" level=info msg="StopPodSandbox for \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\" returns successfully" May 27 18:31:57.330970 containerd[1535]: time="2025-05-27T18:31:57.330940363Z" level=info msg="RemovePodSandbox for \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\"" May 27 18:31:57.338801 containerd[1535]: time="2025-05-27T18:31:57.338736668Z" level=info msg="Forcibly stopping sandbox \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\"" May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.402 [WARNING][4109] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" WorkloadEndpoint="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.402 [INFO][4109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.402 [INFO][4109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" iface="eth0" netns="" May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.402 [INFO][4109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.402 [INFO][4109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.437 [INFO][4120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.437 [INFO][4120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.437 [INFO][4120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.457 [WARNING][4120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.457 [INFO][4120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" HandleID="k8s-pod-network.f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" Workload="146.190.128.44-k8s-whisker--55bcb9dc75--47rc8-eth0" May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.460 [INFO][4120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:31:57.465171 containerd[1535]: 2025-05-27 18:31:57.462 [INFO][4109] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee" May 27 18:31:57.466975 containerd[1535]: time="2025-05-27T18:31:57.465275690Z" level=info msg="TearDown network for sandbox \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\" successfully" May 27 18:31:57.475163 containerd[1535]: time="2025-05-27T18:31:57.475088016Z" level=info msg="Ensure that sandbox f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee in task-service has been cleanup successfully" May 27 18:31:57.478813 containerd[1535]: time="2025-05-27T18:31:57.478718461Z" level=info msg="RemovePodSandbox \"f5f860a6a39ec88ea26c1e891c395a798bc4eac09335ddd70b7aba7fdf06ffee\" returns successfully" May 27 18:31:58.151191 kubelet[1929]: E0527 18:31:58.151115 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:31:58.220848 systemd-networkd[1453]: cali5ec59c6bf6e: Gained IPv6LL May 27 18:31:59.152061 kubelet[1929]: E0527 18:31:59.151955 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:32:00.153141 kubelet[1929]: E0527 18:32:00.153071 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:32:01.154159 kubelet[1929]: E0527 18:32:01.154095 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:32:02.155039 kubelet[1929]: E0527 18:32:02.154903 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:32:03.155281 kubelet[1929]: E0527 18:32:03.155195 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 27 18:32:03.254392 containerd[1535]: time="2025-05-27T18:32:03.254346939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:32:03.274570 kubelet[1929]: I0527 18:32:03.274464 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.843203182 podStartE2EDuration="20.274446269s" podCreationTimestamp="2025-05-27 18:31:43 +0000 UTC" firstStartedPulling="2025-05-27 18:31:56.720641103 +0000 UTC m=+60.439844096" 
lastFinishedPulling="2025-05-27 18:31:57.151884194 +0000 UTC m=+60.871087183" observedRunningTime="2025-05-27 18:31:57.615324065 +0000 UTC m=+61.334527077" watchObservedRunningTime="2025-05-27 18:32:03.274446269 +0000 UTC m=+66.993649279" May 27 18:32:03.640603 containerd[1535]: time="2025-05-27T18:32:03.640518701Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:32:03.641581 containerd[1535]: time="2025-05-27T18:32:03.641456704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:32:03.641843 containerd[1535]: time="2025-05-27T18:32:03.641530730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:32:03.641912 kubelet[1929]: E0527 18:32:03.641800 1929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:32:03.641912 kubelet[1929]: E0527 18:32:03.641857 1929 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:32:03.642117 kubelet[1929]: E0527 18:32:03.642065 1929 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2wlwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-l2hf9_calico-system(923329c2-959f-4193-b2be-f3dbcc05c0db): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:32:03.643788 kubelet[1929]: E0527 18:32:03.643695 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l2hf9" podUID="923329c2-959f-4193-b2be-f3dbcc05c0db"