May 16 02:18:32.100179 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:08:20 -00 2025 May 16 02:18:32.100273 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733 May 16 02:18:32.100301 kernel: BIOS-provided physical RAM map: May 16 02:18:32.100321 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 16 02:18:32.100339 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 16 02:18:32.100363 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 16 02:18:32.100385 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 16 02:18:32.100405 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 16 02:18:32.100424 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 16 02:18:32.100443 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 16 02:18:32.100462 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 16 02:18:32.100481 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 16 02:18:32.100500 kernel: NX (Execute Disable) protection: active May 16 02:18:32.100520 kernel: APIC: Static calls initialized May 16 02:18:32.100547 kernel: SMBIOS 3.0.0 present. May 16 02:18:32.100568 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 16 02:18:32.100588 kernel: Hypervisor detected: KVM May 16 02:18:32.100607 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 16 02:18:32.100627 kernel: kvm-clock: using sched offset of 3709223506 cycles May 16 02:18:32.100648 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 16 02:18:32.100673 kernel: tsc: Detected 1996.249 MHz processor May 16 02:18:32.100695 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 16 02:18:32.100717 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 16 02:18:32.100738 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 16 02:18:32.100759 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 16 02:18:32.100780 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 16 02:18:32.100801 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 16 02:18:32.100822 kernel: ACPI: Early table checksum verification disabled May 16 02:18:32.100846 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 16 02:18:32.100867 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 02:18:32.100888 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 02:18:32.100908 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 02:18:32.100928 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 16 02:18:32.100949 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 02:18:32.100969 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) May 16 02:18:32.100990 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 16 02:18:32.101010 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 16 02:18:32.101035 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 16 02:18:32.101056 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 16 02:18:32.101077 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 16 02:18:32.101104 kernel: No NUMA configuration found May 16 02:18:32.101126 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 16 02:18:32.101147 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 16 02:18:32.101169 kernel: Zone ranges: May 16 02:18:32.101195 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 16 02:18:32.101216 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 16 02:18:32.102281 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 16 02:18:32.102322 kernel: Movable zone start for each node May 16 02:18:32.102344 kernel: Early memory node ranges May 16 02:18:32.102366 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 16 02:18:32.102387 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 16 02:18:32.102409 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 16 02:18:32.102437 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 16 02:18:32.102459 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 02:18:32.102480 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 16 02:18:32.102502 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 16 02:18:32.102523 kernel: ACPI: PM-Timer IO Port: 0x608 May 16 02:18:32.102545 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 16 02:18:32.102567 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 16 02:18:32.102588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 16 02:18:32.102610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 16 02:18:32.102636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 16 02:18:32.102657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 16 02:18:32.102679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 16 02:18:32.102700 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 16 02:18:32.102721 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 16 02:18:32.102743 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 16 02:18:32.102764 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 16 02:18:32.102785 kernel: Booting paravirtualized kernel on KVM May 16 02:18:32.102807 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 16 02:18:32.102834 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 16 02:18:32.102855 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 16 02:18:32.102876 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 16 02:18:32.102898 kernel: pcpu-alloc: [0] 0 1 May 16 02:18:32.102949 kernel: kvm-guest: PV spinlocks disabled, no host support May 16 02:18:32.102974 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733 May 16 02:18:32.102998 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 02:18:32.103019 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 02:18:32.103047 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 02:18:32.103069 kernel: Fallback order for Node 0: 0 May 16 02:18:32.103090 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 16 02:18:32.103111 kernel: Policy zone: Normal May 16 02:18:32.103133 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 02:18:32.103154 kernel: software IO TLB: area num 2. May 16 02:18:32.103176 kernel: Memory: 3962108K/4193772K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43600K init, 1472K bss, 231404K reserved, 0K cma-reserved) May 16 02:18:32.103198 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 16 02:18:32.103219 kernel: ftrace: allocating 37997 entries in 149 pages May 16 02:18:32.103331 kernel: ftrace: allocated 149 pages with 4 groups May 16 02:18:32.103354 kernel: Dynamic Preempt: voluntary May 16 02:18:32.103376 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 02:18:32.103399 kernel: rcu: RCU event tracing is enabled. May 16 02:18:32.103421 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 16 02:18:32.103443 kernel: Trampoline variant of Tasks RCU enabled. May 16 02:18:32.103465 kernel: Rude variant of Tasks RCU enabled. May 16 02:18:32.103486 kernel: Tracing variant of Tasks RCU enabled. May 16 02:18:32.103508 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 02:18:32.103536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 16 02:18:32.103557 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 16 02:18:32.103578 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 16 02:18:32.103600 kernel: Console: colour VGA+ 80x25 May 16 02:18:32.103621 kernel: printk: console [tty0] enabled May 16 02:18:32.103642 kernel: printk: console [ttyS0] enabled May 16 02:18:32.103663 kernel: ACPI: Core revision 20230628 May 16 02:18:32.103685 kernel: APIC: Switch to symmetric I/O mode setup May 16 02:18:32.103701 kernel: x2apic enabled May 16 02:18:32.103721 kernel: APIC: Switched APIC routing to: physical x2apic May 16 02:18:32.103737 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 16 02:18:32.103753 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 16 02:18:32.103769 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) May 16 02:18:32.103785 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 16 02:18:32.103801 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 16 02:18:32.103817 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 16 02:18:32.103833 kernel: Spectre V2 : Mitigation: Retpolines May 16 02:18:32.103849 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 16 02:18:32.103868 kernel: Speculative Store Bypass: Vulnerable May 16 02:18:32.103885 kernel: x86/fpu: x87 FPU will use FXSAVE May 16 02:18:32.103901 kernel: Freeing SMP alternatives memory: 32K May 16 02:18:32.103917 kernel: pid_max: default: 32768 minimum: 301 May 16 02:18:32.103944 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 16 02:18:32.103965 kernel: landlock: Up and running. May 16 02:18:32.103982 kernel: SELinux: Initializing. May 16 02:18:32.103999 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 02:18:32.104015 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 02:18:32.104033 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 16 02:18:32.104050 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 02:18:32.104068 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 02:18:32.104088 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 02:18:32.104105 kernel: Performance Events: AMD PMU driver. May 16 02:18:32.104122 kernel: ... version: 0 May 16 02:18:32.104139 kernel: ... bit width: 48 May 16 02:18:32.104232 kernel: ... generic registers: 4 May 16 02:18:32.105314 kernel: ... value mask: 0000ffffffffffff May 16 02:18:32.105333 kernel: ... max period: 00007fffffffffff May 16 02:18:32.105350 kernel: ... fixed-purpose events: 0 May 16 02:18:32.105367 kernel: ... event mask: 000000000000000f May 16 02:18:32.105384 kernel: signal: max sigframe size: 1440 May 16 02:18:32.105401 kernel: rcu: Hierarchical SRCU implementation. May 16 02:18:32.105419 kernel: rcu: Max phase no-delay instances is 400. May 16 02:18:32.105436 kernel: smp: Bringing up secondary CPUs ... May 16 02:18:32.105452 kernel: smpboot: x86: Booting SMP configuration: May 16 02:18:32.105475 kernel: .... 
node #0, CPUs: #1 May 16 02:18:32.105492 kernel: smp: Brought up 1 node, 2 CPUs May 16 02:18:32.105509 kernel: smpboot: Max logical packages: 2 May 16 02:18:32.105526 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 16 02:18:32.105542 kernel: devtmpfs: initialized May 16 02:18:32.105559 kernel: x86/mm: Memory block size: 128MB May 16 02:18:32.105576 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 02:18:32.105593 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 16 02:18:32.105610 kernel: pinctrl core: initialized pinctrl subsystem May 16 02:18:32.105630 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 02:18:32.105647 kernel: audit: initializing netlink subsys (disabled) May 16 02:18:32.105664 kernel: audit: type=2000 audit(1747361911.434:1): state=initialized audit_enabled=0 res=1 May 16 02:18:32.105681 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 02:18:32.105697 kernel: thermal_sys: Registered thermal governor 'user_space' May 16 02:18:32.105714 kernel: cpuidle: using governor menu May 16 02:18:32.105731 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 02:18:32.105748 kernel: dca service started, version 1.12.1 May 16 02:18:32.105764 kernel: PCI: Using configuration type 1 for base access May 16 02:18:32.105785 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 16 02:18:32.105802 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 02:18:32.105819 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 16 02:18:32.105835 kernel: ACPI: Added _OSI(Module Device) May 16 02:18:32.105852 kernel: ACPI: Added _OSI(Processor Device) May 16 02:18:32.105869 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 02:18:32.105885 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 02:18:32.105902 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 02:18:32.105919 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 16 02:18:32.105939 kernel: ACPI: Interpreter enabled May 16 02:18:32.105955 kernel: ACPI: PM: (supports S0 S3 S5) May 16 02:18:32.105972 kernel: ACPI: Using IOAPIC for interrupt routing May 16 02:18:32.105989 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 16 02:18:32.106006 kernel: PCI: Using E820 reservations for host bridge windows May 16 02:18:32.106022 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 16 02:18:32.106039 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 02:18:32.106384 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 16 02:18:32.106585 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 16 02:18:32.106760 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 16 02:18:32.106785 kernel: acpiphp: Slot [3] registered May 16 02:18:32.106803 kernel: acpiphp: Slot [4] registered May 16 02:18:32.106820 kernel: acpiphp: Slot [5] registered May 16 02:18:32.106836 kernel: acpiphp: Slot [6] registered May 16 02:18:32.106853 kernel: acpiphp: Slot [7] registered May 16 02:18:32.106870 kernel: acpiphp: Slot [8] registered May 16 02:18:32.106892 kernel: acpiphp: Slot [9] registered May 16 02:18:32.106908 kernel: acpiphp: Slot [10] registered May 16 02:18:32.106925 
kernel: acpiphp: Slot [11] registered May 16 02:18:32.106941 kernel: acpiphp: Slot [12] registered May 16 02:18:32.106958 kernel: acpiphp: Slot [13] registered May 16 02:18:32.106974 kernel: acpiphp: Slot [14] registered May 16 02:18:32.106991 kernel: acpiphp: Slot [15] registered May 16 02:18:32.107007 kernel: acpiphp: Slot [16] registered May 16 02:18:32.107023 kernel: acpiphp: Slot [17] registered May 16 02:18:32.107040 kernel: acpiphp: Slot [18] registered May 16 02:18:32.107060 kernel: acpiphp: Slot [19] registered May 16 02:18:32.107076 kernel: acpiphp: Slot [20] registered May 16 02:18:32.107093 kernel: acpiphp: Slot [21] registered May 16 02:18:32.107109 kernel: acpiphp: Slot [22] registered May 16 02:18:32.107125 kernel: acpiphp: Slot [23] registered May 16 02:18:32.107142 kernel: acpiphp: Slot [24] registered May 16 02:18:32.107158 kernel: acpiphp: Slot [25] registered May 16 02:18:32.107175 kernel: acpiphp: Slot [26] registered May 16 02:18:32.107191 kernel: acpiphp: Slot [27] registered May 16 02:18:32.108273 kernel: acpiphp: Slot [28] registered May 16 02:18:32.108285 kernel: acpiphp: Slot [29] registered May 16 02:18:32.108294 kernel: acpiphp: Slot [30] registered May 16 02:18:32.108303 kernel: acpiphp: Slot [31] registered May 16 02:18:32.108312 kernel: PCI host bridge to bus 0000:00 May 16 02:18:32.108424 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 16 02:18:32.108514 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 16 02:18:32.108605 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 16 02:18:32.108695 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 16 02:18:32.108780 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 16 02:18:32.108866 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 02:18:32.108981 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 16 02:18:32.109090 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 16 02:18:32.109194 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 16 02:18:32.109344 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 16 02:18:32.109441 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 16 02:18:32.109536 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 16 02:18:32.109631 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 16 02:18:32.109724 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 16 02:18:32.109825 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 16 02:18:32.109920 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 16 02:18:32.110022 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 16 02:18:32.110124 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 16 02:18:32.110233 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 16 02:18:32.111376 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 16 02:18:32.111480 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 16 02:18:32.111582 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 16 02:18:32.111685 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 16 02:18:32.111802 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 16 02:18:32.111901 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 16 02:18:32.111996 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 16 02:18:32.112092 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 16 02:18:32.112221 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 16 02:18:32.112806 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 16 02:18:32.112905 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 16 02:18:32.113007 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 16 02:18:32.113103 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 16 02:18:32.113204 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 16 02:18:32.113336 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 16 02:18:32.113432 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 16 02:18:32.113534 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 16 02:18:32.113630 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 16 02:18:32.113730 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 16 02:18:32.113824 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 16 02:18:32.113839 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 16 02:18:32.113848 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 16 02:18:32.113858 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 16 02:18:32.113867 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 16 02:18:32.113876 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 16 02:18:32.113885 kernel: iommu: Default domain type: Translated May 16 02:18:32.113898 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 16 02:18:32.113908 kernel: PCI: Using ACPI for IRQ routing May 16 02:18:32.113917 kernel: PCI: pci_cache_line_size set to 64 bytes May 16 02:18:32.113926 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 16 02:18:32.113935 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 16 02:18:32.114028 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 16 02:18:32.114122 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 16 02:18:32.114232 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 16 02:18:32.114283 kernel: vgaarb: loaded May 16 02:18:32.114297 kernel: clocksource: Switched to clocksource kvm-clock May 16 02:18:32.114306 kernel: VFS: Disk quotas dquot_6.6.0 May 16 02:18:32.114315 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 02:18:32.114325 kernel: pnp: PnP ACPI init May 16 02:18:32.114433 kernel: pnp 00:03: [dma 2] May 16 02:18:32.114448 kernel: pnp: PnP ACPI: found 5 devices May 16 02:18:32.114458 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 16 02:18:32.114467 kernel: NET: Registered PF_INET protocol family May 16 02:18:32.114480 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 16 02:18:32.114489 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 16 02:18:32.114498 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 02:18:32.114508 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 02:18:32.114517 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) May 16 02:18:32.114526 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 16 02:18:32.114535 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 02:18:32.114544 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 02:18:32.114554 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 02:18:32.114565 kernel: NET: Registered PF_XDP protocol family May 16 02:18:32.114650 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 16 02:18:32.114733 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 16 02:18:32.114815 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 16 02:18:32.114897 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 16 02:18:32.114980 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 16 02:18:32.115080 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 16 02:18:32.115181 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 16 02:18:32.115199 kernel: PCI: CLS 0 bytes, default 64 May 16 02:18:32.115210 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 16 02:18:32.115220 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 16 02:18:32.115229 kernel: Initialise system trusted keyrings May 16 02:18:32.115269 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 16 02:18:32.115280 kernel: Key type asymmetric registered May 16 02:18:32.115289 kernel: Asymmetric key parser 'x509' registered May 16 02:18:32.115299 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 16 02:18:32.115312 kernel: io scheduler mq-deadline registered May 16 02:18:32.115321 kernel: io scheduler kyber registered May 16 02:18:32.115331 kernel: io scheduler bfq registered May 16 02:18:32.115341 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 16 02:18:32.115351 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 16 02:18:32.115361 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 16 02:18:32.115371 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 16 02:18:32.115381 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 16 02:18:32.115391 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 02:18:32.115401 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 16 02:18:32.115413 kernel: random: crng init done May 16 02:18:32.115422 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 16 02:18:32.115432 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 16 02:18:32.115442 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 16 02:18:32.115544 kernel: rtc_cmos 00:04: RTC can wake from S4 May 16 02:18:32.115561 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 02:18:32.115650 kernel: rtc_cmos 00:04: registered as rtc0 May 16 02:18:32.115745 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T02:18:31 UTC (1747361911) May 16 02:18:32.115835 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 16 02:18:32.115849 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 16 02:18:32.115858 kernel: NET: Registered PF_INET6 protocol family May 16 02:18:32.115868 kernel: Segment Routing with IPv6 May 16 02:18:32.115877 kernel: In-situ OAM (IOAM) with IPv6 May 16 02:18:32.115886 kernel: NET: Registered PF_PACKET 
protocol family May 16 02:18:32.115895 kernel: Key type dns_resolver registered May 16 02:18:32.115904 kernel: IPI shorthand broadcast: enabled May 16 02:18:32.115913 kernel: sched_clock: Marking stable (991008141, 169691494)->(1192736487, -32036852) May 16 02:18:32.115931 kernel: registered taskstats version 1 May 16 02:18:32.115940 kernel: Loading compiled-in X.509 certificates May 16 02:18:32.115950 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 36d9e3bf63b9b28466bcfa7a508d814673a33a26' May 16 02:18:32.115959 kernel: Key type .fscrypt registered May 16 02:18:32.115968 kernel: Key type fscrypt-provisioning registered May 16 02:18:32.115977 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 02:18:32.115986 kernel: ima: Allocated hash algorithm: sha1 May 16 02:18:32.115995 kernel: ima: No architecture policies found May 16 02:18:32.116006 kernel: clk: Disabling unused clocks May 16 02:18:32.116015 kernel: Freeing unused kernel image (initmem) memory: 43600K May 16 02:18:32.116024 kernel: Write protecting the kernel read-only data: 40960k May 16 02:18:32.116033 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 16 02:18:32.116042 kernel: Run /init as init process May 16 02:18:32.116051 kernel: with arguments: May 16 02:18:32.116060 kernel: /init May 16 02:18:32.116069 kernel: with environment: May 16 02:18:32.116078 kernel: HOME=/ May 16 02:18:32.116089 kernel: TERM=linux May 16 02:18:32.116097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 02:18:32.116108 systemd[1]: Successfully made /usr/ read-only. May 16 02:18:32.116121 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 02:18:32.116132 systemd[1]: Detected virtualization kvm. May 16 02:18:32.116142 systemd[1]: Detected architecture x86-64. May 16 02:18:32.116151 systemd[1]: Running in initrd. May 16 02:18:32.116162 systemd[1]: No hostname configured, using default hostname. May 16 02:18:32.116172 systemd[1]: Hostname set to . May 16 02:18:32.116182 systemd[1]: Initializing machine ID from VM UUID. May 16 02:18:32.116192 systemd[1]: Queued start job for default target initrd.target. May 16 02:18:32.116202 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 02:18:32.116211 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 02:18:32.116222 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 02:18:32.116282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 02:18:32.116295 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 02:18:32.116306 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 02:18:32.116317 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 02:18:32.116328 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
May 16 02:18:32.116338 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 02:18:32.116350 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 02:18:32.116360 systemd[1]: Reached target paths.target - Path Units. May 16 02:18:32.116370 systemd[1]: Reached target slices.target - Slice Units. May 16 02:18:32.116380 systemd[1]: Reached target swap.target - Swaps. May 16 02:18:32.116390 systemd[1]: Reached target timers.target - Timer Units. May 16 02:18:32.116400 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 02:18:32.116410 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 02:18:32.116420 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 02:18:32.116432 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 16 02:18:32.116442 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 02:18:32.116452 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 02:18:32.116462 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 02:18:32.116472 systemd[1]: Reached target sockets.target - Socket Units. May 16 02:18:32.116482 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 02:18:32.116492 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 02:18:32.116502 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 02:18:32.116512 systemd[1]: Starting systemd-fsck-usr.service... May 16 02:18:32.116524 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 02:18:32.116534 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 02:18:32.116544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 02:18:32.116554 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 02:18:32.116564 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 02:18:32.116575 systemd[1]: Finished systemd-fsck-usr.service. May 16 02:18:32.116609 systemd-journald[183]: Collecting audit messages is disabled. May 16 02:18:32.116635 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 02:18:32.116649 systemd-journald[183]: Journal started May 16 02:18:32.116672 systemd-journald[183]: Runtime Journal (/run/log/journal/be562696422e4b5098112c1c5807b187) is 8M, max 78.2M, 70.2M free. May 16 02:18:32.102966 systemd-modules-load[185]: Inserted module 'overlay' May 16 02:18:32.157227 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 02:18:32.157264 kernel: Bridge firewalling registered May 16 02:18:32.138109 systemd-modules-load[185]: Inserted module 'br_netfilter' May 16 02:18:32.162252 systemd[1]: Started systemd-journald.service - Journal Service. May 16 02:18:32.163025 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 02:18:32.163722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 02:18:32.166454 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 02:18:32.168155 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 16 02:18:32.171380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 02:18:32.172519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 02:18:32.175376 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 02:18:32.190157 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 02:18:32.192432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 02:18:32.197354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 02:18:32.198100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 02:18:32.201277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 02:18:32.204264 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 02:18:32.218272 dracut-cmdline[220]: dracut-dracut-053 May 16 02:18:32.219491 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733 May 16 02:18:32.241674 systemd-resolved[214]: Positive Trust Anchors: May 16 02:18:32.242393 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 02:18:32.243139 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 02:18:32.248892 systemd-resolved[214]: Defaulting to hostname 'linux'. May 16 02:18:32.250269 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 02:18:32.250799 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 02:18:32.282265 kernel: SCSI subsystem initialized May 16 02:18:32.293297 kernel: Loading iSCSI transport class v2.0-870. May 16 02:18:32.305309 kernel: iscsi: registered transport (tcp) May 16 02:18:32.327719 kernel: iscsi: registered transport (qla4xxx) May 16 02:18:32.327782 kernel: QLogic iSCSI HBA Driver May 16 02:18:32.380859 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 02:18:32.383488 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 02:18:32.444944 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 16 02:18:32.445027 kernel: device-mapper: uevent: version 1.0.3 May 16 02:18:32.449268 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 02:18:32.508445 kernel: raid6: sse2x4 gen() 5207 MB/s May 16 02:18:32.527307 kernel: raid6: sse2x2 gen() 5998 MB/s May 16 02:18:32.545712 kernel: raid6: sse2x1 gen() 9049 MB/s May 16 02:18:32.545745 kernel: raid6: using algorithm sse2x1 gen() 9049 MB/s May 16 02:18:32.564631 kernel: raid6: .... xor() 7321 MB/s, rmw enabled May 16 02:18:32.564679 kernel: raid6: using ssse3x2 recovery algorithm May 16 02:18:32.587563 kernel: xor: measuring software checksum speed May 16 02:18:32.587618 kernel: prefetch64-sse : 18518 MB/sec May 16 02:18:32.588842 kernel: generic_sse : 16854 MB/sec May 16 02:18:32.588896 kernel: xor: using function: prefetch64-sse (18518 MB/sec) May 16 02:18:32.763312 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 02:18:32.779916 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 02:18:32.785593 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 02:18:32.812281 systemd-udevd[405]: Using default interface naming scheme 'v255'. May 16 02:18:32.817053 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 02:18:32.824168 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 02:18:32.852313 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation May 16 02:18:32.892196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 02:18:32.896703 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 02:18:32.953625 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 02:18:32.957854 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 02:18:33.002767 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 02:18:33.004959 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 02:18:33.008128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 02:18:33.011527 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 02:18:33.015776 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 02:18:33.040260 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 02:18:33.054262 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 16 02:18:33.069601 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 16 02:18:33.079770 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 02:18:33.079803 kernel: GPT:17805311 != 20971519 May 16 02:18:33.079816 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 02:18:33.081030 kernel: GPT:17805311 != 20971519 May 16 02:18:33.081944 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 02:18:33.084540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 02:18:33.084961 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 02:18:33.085091 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 02:18:33.086273 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 16 02:18:33.087122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 02:18:33.087280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 02:18:33.088959 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 02:18:33.091753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 02:18:33.094614 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 02:18:33.101288 kernel: libata version 3.00 loaded. May 16 02:18:33.103656 kernel: ata_piix 0000:00:01.1: version 2.13 May 16 02:18:33.107273 kernel: scsi host0: ata_piix May 16 02:18:33.111606 kernel: scsi host1: ata_piix May 16 02:18:33.117258 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 16 02:18:33.117301 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 16 02:18:33.139268 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (458) May 16 02:18:33.149258 kernel: BTRFS: device fsid a728581e-9e7f-4655-895a-4f66e17e3645 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (454) May 16 02:18:33.170604 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 02:18:33.181724 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 02:18:33.209299 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 02:18:33.218663 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 02:18:33.219249 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 02:18:33.230912 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 02:18:33.233387 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 02:18:33.237339 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 02:18:33.256180 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 02:18:33.258406 disk-uuid[507]: Primary Header is updated. May 16 02:18:33.258406 disk-uuid[507]: Secondary Entries is updated. May 16 02:18:33.258406 disk-uuid[507]: Secondary Header is updated. May 16 02:18:33.268074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 02:18:34.284299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 02:18:34.286444 disk-uuid[516]: The operation has completed successfully. May 16 02:18:34.362553 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 02:18:34.362769 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 02:18:34.414767 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 02:18:34.433504 sh[527]: Success May 16 02:18:34.455288 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 16 02:18:34.554834 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 02:18:34.566367 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 02:18:34.573273 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 16 02:18:34.609291 kernel: BTRFS info (device dm-0): first mount of filesystem a728581e-9e7f-4655-895a-4f66e17e3645 May 16 02:18:34.609377 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 02:18:34.615468 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 02:18:34.619356 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 02:18:34.623932 kernel: BTRFS info (device dm-0): using free space tree May 16 02:18:34.644212 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 02:18:34.646488 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 02:18:34.649080 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 02:18:34.653509 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 02:18:34.700560 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 02:18:34.700639 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 02:18:34.706091 kernel: BTRFS info (device vda6): using free space tree May 16 02:18:34.715335 kernel: BTRFS info (device vda6): auto enabling async discard May 16 02:18:34.721340 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 02:18:34.730000 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 02:18:34.735504 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 02:18:34.782469 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 02:18:34.789955 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 02:18:34.817308 systemd-networkd[707]: lo: Link UP May 16 02:18:34.817318 systemd-networkd[707]: lo: Gained carrier May 16 02:18:34.818526 systemd-networkd[707]: Enumeration completed May 16 02:18:34.818757 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 02:18:34.819135 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 02:18:34.819139 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 02:18:34.819435 systemd[1]: Reached target network.target - Network. May 16 02:18:34.820078 systemd-networkd[707]: eth0: Link UP May 16 02:18:34.820083 systemd-networkd[707]: eth0: Gained carrier May 16 02:18:34.820092 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 02:18:34.831288 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.70/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 16 02:18:34.909329 ignition[642]: Ignition 2.20.0 May 16 02:18:34.909344 ignition[642]: Stage: fetch-offline May 16 02:18:34.909379 ignition[642]: no configs at "/usr/lib/ignition/base.d" May 16 02:18:34.911792 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 16 02:18:34.909388 ignition[642]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 02:18:34.909485 ignition[642]: parsed url from cmdline: "" May 16 02:18:34.909489 ignition[642]: no config URL provided May 16 02:18:34.909495 ignition[642]: reading system config file "/usr/lib/ignition/user.ign" May 16 02:18:34.909504 ignition[642]: no config at "/usr/lib/ignition/user.ign" May 16 02:18:34.909509 ignition[642]: failed to fetch config: resource requires networking May 16 02:18:34.909701 ignition[642]: Ignition finished successfully May 16 02:18:34.917521 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 16 02:18:34.957315 ignition[718]: Ignition 2.20.0 May 16 02:18:34.957345 ignition[718]: Stage: fetch May 16 02:18:34.957707 ignition[718]: no configs at "/usr/lib/ignition/base.d" May 16 02:18:34.957733 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 02:18:34.957921 ignition[718]: parsed url from cmdline: "" May 16 02:18:34.957931 ignition[718]: no config URL provided May 16 02:18:34.957944 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" May 16 02:18:34.957963 ignition[718]: no config at "/usr/lib/ignition/user.ign" May 16 02:18:34.958286 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 16 02:18:34.960827 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 16 02:18:34.960878 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 16 02:18:35.354133 ignition[718]: GET result: OK May 16 02:18:35.354583 ignition[718]: parsing config with SHA512: 200f1081e247f91239a3f73fb3e85d50479899bb503a7ea466c67396602882d15f055aa3085daadee61a243fee960dee1f186e6b9d2782f95376f71b9a1f2ce1 May 16 02:18:35.361606 unknown[718]: fetched base config from "system" May 16 02:18:35.361631 unknown[718]: fetched base config from "system" May 16 02:18:35.361646 unknown[718]: fetched user config from "openstack" May 16 02:18:35.363288 ignition[718]: fetch: fetch complete May 16 02:18:35.367143 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 16 02:18:35.363302 ignition[718]: fetch: fetch passed May 16 02:18:35.363398 ignition[718]: Ignition finished successfully May 16 02:18:35.372530 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 02:18:35.419737 ignition[725]: Ignition 2.20.0 May 16 02:18:35.419764 ignition[725]: Stage: kargs May 16 02:18:35.420154 ignition[725]: no configs at "/usr/lib/ignition/base.d" May 16 02:18:35.420182 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 02:18:35.425649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 02:18:35.421979 ignition[725]: kargs: kargs passed May 16 02:18:35.422077 ignition[725]: Ignition finished successfully May 16 02:18:35.430569 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 02:18:35.473384 ignition[731]: Ignition 2.20.0 May 16 02:18:35.473402 ignition[731]: Stage: disks May 16 02:18:35.473803 ignition[731]: no configs at "/usr/lib/ignition/base.d" May 16 02:18:35.473829 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 02:18:35.477717 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 02:18:35.475681 ignition[731]: disks: disks passed May 16 02:18:35.481568 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 16 02:18:35.475780 ignition[731]: Ignition finished successfully May 16 02:18:35.483455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 02:18:35.486068 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 02:18:35.488935 systemd[1]: Reached target sysinit.target - System Initialization. May 16 02:18:35.491374 systemd[1]: Reached target basic.target - Basic System. May 16 02:18:35.496500 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 02:18:35.543355 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 16 02:18:35.557909 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 02:18:35.562909 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 02:18:35.723299 kernel: EXT4-fs (vda9): mounted filesystem f27adc75-a467-4bfb-9c02-79a2879452a3 r/w with ordered data mode. Quota mode: none. May 16 02:18:35.723913 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 02:18:35.724773 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 02:18:35.728073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 02:18:35.732331 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 02:18:35.733639 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 02:18:35.735705 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 16 02:18:35.736991 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 02:18:35.737022 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 02:18:35.744077 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 02:18:35.748351 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 02:18:35.762337 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (748) May 16 02:18:35.780167 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 02:18:35.780223 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 02:18:35.784404 kernel: BTRFS info (device vda6): using free space tree May 16 02:18:35.796290 kernel: BTRFS info (device vda6): auto enabling async discard May 16 02:18:35.802464 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 02:18:35.873501 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory May 16 02:18:35.880036 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory May 16 02:18:35.885167 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory May 16 02:18:35.891143 initrd-setup-root[798]: cut: /sysroot/etc/gshadow: No such file or directory May 16 02:18:35.983792 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 02:18:35.985568 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 02:18:35.988361 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 02:18:35.999434 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 16 02:18:36.003225 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 02:18:36.030949 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 02:18:36.033353 ignition[866]: INFO : Ignition 2.20.0 May 16 02:18:36.033353 ignition[866]: INFO : Stage: mount May 16 02:18:36.035929 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 02:18:36.035929 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 02:18:36.035929 ignition[866]: INFO : mount: mount passed May 16 02:18:36.035929 ignition[866]: INFO : Ignition finished successfully May 16 02:18:36.035885 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 02:18:36.195785 systemd-networkd[707]: eth0: Gained IPv6LL May 16 02:18:42.951725 coreos-metadata[750]: May 16 02:18:42.951 WARN failed to locate config-drive, using the metadata service API instead May 16 02:18:42.979837 coreos-metadata[750]: May 16 02:18:42.979 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 16 02:18:42.994026 coreos-metadata[750]: May 16 02:18:42.993 INFO Fetch successful May 16 02:18:42.994026 coreos-metadata[750]: May 16 02:18:42.993 INFO wrote hostname ci-4284-0-0-n-0a916fa60e.novalocal to /sysroot/etc/hostname May 16 02:18:42.997730 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 16 02:18:42.997983 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 16 02:18:43.005442 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 02:18:43.037542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 02:18:43.075330 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (883) May 16 02:18:43.084141 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 02:18:43.084220 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 02:18:43.090410 kernel: BTRFS info (device vda6): using free space tree May 16 02:18:43.098308 kernel: BTRFS info (device vda6): auto enabling async discard May 16 02:18:43.103892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 02:18:43.146973 ignition[901]: INFO : Ignition 2.20.0 May 16 02:18:43.146973 ignition[901]: INFO : Stage: files May 16 02:18:43.150039 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 02:18:43.150039 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 02:18:43.150039 ignition[901]: DEBUG : files: compiled without relabeling support, skipping May 16 02:18:43.155666 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 02:18:43.155666 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 02:18:43.159856 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 02:18:43.159856 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 02:18:43.159856 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 02:18:43.157744 unknown[901]: wrote ssh authorized keys file for user: core May 16 02:18:43.168529 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 16 02:18:43.168529 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 16 02:18:43.168529 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 02:18:43.168529 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 02:18:43.168529 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 02:18:43.168529 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 02:18:43.168529 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 02:18:43.168529 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 16 02:18:43.942402 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 16 02:18:45.582894 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 02:18:45.585519 ignition[901]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 02:18:45.585519 ignition[901]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 02:18:45.585519 ignition[901]: INFO : files: files passed May 16 02:18:45.585519 ignition[901]: INFO : Ignition finished successfully May 16 02:18:45.585118 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 02:18:45.592591 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 02:18:45.595361 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
May 16 02:18:45.607027 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 02:18:45.608646 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 02:18:45.613274 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 02:18:45.613274 initrd-setup-root-after-ignition[931]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 02:18:45.618354 initrd-setup-root-after-ignition[935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 02:18:45.616830 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 02:18:45.618981 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 02:18:45.622350 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 02:18:45.671924 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 02:18:45.673473 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 02:18:45.675421 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 02:18:45.676697 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 02:18:45.679043 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 02:18:45.680722 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 02:18:45.707967 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 02:18:45.713703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 02:18:45.744452 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 02:18:45.746167 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 02:18:45.749209 systemd[1]: Stopped target timers.target - Timer Units. May 16 02:18:45.752275 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 02:18:45.752579 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 02:18:45.755842 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 02:18:45.757688 systemd[1]: Stopped target basic.target - Basic System. May 16 02:18:45.760724 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 02:18:45.763469 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 02:18:45.766144 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 02:18:45.778552 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 02:18:45.781594 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 02:18:45.784766 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 02:18:45.787803 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 02:18:45.790905 systemd[1]: Stopped target swap.target - Swaps. May 16 02:18:45.793692 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 02:18:45.793974 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 02:18:45.797285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 02:18:45.799153 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 16 02:18:45.801760 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 02:18:45.802002 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 02:18:45.804969 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 02:18:45.805425 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 02:18:45.809461 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 02:18:45.809774 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 02:18:45.811507 systemd[1]: ignition-files.service: Deactivated successfully. May 16 02:18:45.811775 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 02:18:45.817665 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 02:18:45.824340 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 02:18:45.827004 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 02:18:45.828525 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 02:18:45.832619 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 02:18:45.833012 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 02:18:45.845280 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 02:18:45.845403 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 02:18:45.863347 ignition[955]: INFO : Ignition 2.20.0 May 16 02:18:45.863347 ignition[955]: INFO : Stage: umount May 16 02:18:45.866381 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 02:18:45.866381 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 02:18:45.866381 ignition[955]: INFO : umount: umount passed May 16 02:18:45.866381 ignition[955]: INFO : Ignition finished successfully May 16 02:18:45.865153 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 02:18:45.865282 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 02:18:45.869216 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 02:18:45.869329 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 02:18:45.869880 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 02:18:45.869928 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 02:18:45.870636 systemd[1]: ignition-fetch.service: Deactivated successfully. May 16 02:18:45.870679 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 16 02:18:45.871685 systemd[1]: Stopped target network.target - Network. May 16 02:18:45.872664 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 02:18:45.872711 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 02:18:45.873831 systemd[1]: Stopped target paths.target - Path Units. May 16 02:18:45.874478 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 02:18:45.878285 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 02:18:45.878932 systemd[1]: Stopped target slices.target - Slice Units. May 16 02:18:45.879965 systemd[1]: Stopped target sockets.target - Socket Units. May 16 02:18:45.881046 systemd[1]: iscsid.socket: Deactivated successfully. 
May 16 02:18:45.881083 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 02:18:45.882070 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 02:18:45.882103 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 02:18:45.883330 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 02:18:45.883377 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 02:18:45.884534 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 02:18:45.884573 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 02:18:45.885616 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 02:18:45.886984 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 02:18:45.889297 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 02:18:45.889902 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 02:18:45.889986 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 02:18:45.892514 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 02:18:45.892583 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 02:18:45.893714 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 02:18:45.893807 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 02:18:45.897698 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 02:18:45.897928 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 02:18:45.898024 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 02:18:45.900091 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 02:18:45.900817 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 02:18:45.900869 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 02:18:45.902333 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 02:18:45.905016 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 02:18:45.905071 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 02:18:45.906396 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 02:18:45.906441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 02:18:45.908350 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 02:18:45.908407 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 02:18:45.909155 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 02:18:45.909200 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 02:18:45.910760 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 02:18:45.914544 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 02:18:45.914613 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 02:18:45.917949 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 02:18:45.918308 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 02:18:45.920417 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
May 16 02:18:45.920454 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 02:18:45.922580 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 02:18:45.922612 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 02:18:45.923881 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 02:18:45.923923 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 02:18:45.926441 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 02:18:45.926489 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 02:18:45.931722 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 02:18:45.931768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 02:18:45.935351 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 02:18:45.936370 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 02:18:45.936421 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 02:18:45.937699 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 16 02:18:45.937741 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 02:18:45.939151 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 02:18:45.939193 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 02:18:45.940427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 02:18:45.940469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 02:18:45.943048 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 02:18:45.943107 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 02:18:45.945578 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 02:18:45.946315 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 02:18:45.951417 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 02:18:45.951521 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 02:18:45.952763 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 02:18:45.956363 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 02:18:45.973901 systemd[1]: Switching root. May 16 02:18:46.012102 systemd-journald[183]: Journal stopped May 16 02:18:47.988320 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). 
May 16 02:18:47.988380 kernel: SELinux: policy capability network_peer_controls=1 May 16 02:18:47.988399 kernel: SELinux: policy capability open_perms=1 May 16 02:18:47.988411 kernel: SELinux: policy capability extended_socket_class=1 May 16 02:18:47.988426 kernel: SELinux: policy capability always_check_network=0 May 16 02:18:47.988438 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 02:18:47.988454 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 02:18:47.988466 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 02:18:47.988478 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 02:18:47.988490 kernel: audit: type=1403 audit(1747361926.722:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 02:18:47.988507 systemd[1]: Successfully loaded SELinux policy in 78.507ms. May 16 02:18:47.988525 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.594ms. May 16 02:18:47.988542 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 02:18:47.988559 systemd[1]: Detected virtualization kvm. May 16 02:18:47.988572 systemd[1]: Detected architecture x86-64. May 16 02:18:47.988585 systemd[1]: Detected first boot. May 16 02:18:47.988598 systemd[1]: Hostname set to . May 16 02:18:47.988611 systemd[1]: Initializing machine ID from VM UUID. May 16 02:18:47.988624 zram_generator::config[1001]: No configuration found. May 16 02:18:47.988638 kernel: Guest personality initialized and is inactive May 16 02:18:47.988652 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 16 02:18:47.988665 kernel: Initialized host personality May 16 02:18:47.988677 kernel: NET: Registered PF_VSOCK protocol family May 16 02:18:47.988690 systemd[1]: Populated /etc with preset unit settings. May 16 02:18:47.988704 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 02:18:47.988717 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 02:18:47.988731 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 02:18:47.988746 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 02:18:47.988760 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 02:18:47.988774 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 02:18:47.988787 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 02:18:47.988800 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 02:18:47.988813 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 02:18:47.988827 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 02:18:47.988840 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 02:18:47.988853 systemd[1]: Created slice user.slice - User and Session Slice. May 16 02:18:47.988867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 02:18:47.988880 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 16 02:18:47.988896 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 02:18:47.988909 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 02:18:47.988923 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 02:18:47.988936 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 02:18:47.988950 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 02:18:47.988963 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 02:18:47.988978 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 02:18:47.988991 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 02:18:47.989005 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 02:18:47.989018 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 02:18:47.989031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 02:18:47.989044 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 02:18:47.989057 systemd[1]: Reached target slices.target - Slice Units. May 16 02:18:47.989070 systemd[1]: Reached target swap.target - Swaps. May 16 02:18:47.989082 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 02:18:47.989101 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 02:18:47.989118 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 02:18:47.989131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 02:18:47.989144 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 02:18:47.989158 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 02:18:47.989171 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 02:18:47.989183 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 02:18:47.989196 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 02:18:47.989209 systemd[1]: Mounting media.mount - External Media Directory... May 16 02:18:47.989230 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 02:18:47.992299 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 02:18:47.992320 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 02:18:47.992333 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 02:18:47.992347 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 02:18:47.992361 systemd[1]: Reached target machines.target - Containers. May 16 02:18:47.992374 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 02:18:47.992387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 02:18:47.992405 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
May 16 02:18:47.992419 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 02:18:47.992432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 02:18:47.992445 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 02:18:47.992458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 02:18:47.992471 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 02:18:47.992484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 02:18:47.992497 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 02:18:47.992510 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 02:18:47.992526 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 02:18:47.992539 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 02:18:47.992552 systemd[1]: Stopped systemd-fsck-usr.service. May 16 02:18:47.992566 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 02:18:47.992579 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 02:18:47.992592 kernel: fuse: init (API version 7.39) May 16 02:18:47.992605 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 02:18:47.992618 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 02:18:47.992633 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 02:18:47.992647 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 02:18:47.992660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 02:18:47.992673 systemd[1]: verity-setup.service: Deactivated successfully. May 16 02:18:47.992686 systemd[1]: Stopped verity-setup.service. May 16 02:18:47.992702 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 02:18:47.992717 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 02:18:47.992730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 02:18:47.992743 systemd[1]: Mounted media.mount - External Media Directory. May 16 02:18:47.992756 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 02:18:47.992771 kernel: loop: module loaded May 16 02:18:47.992784 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 02:18:47.992797 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 02:18:47.992810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 02:18:47.992822 kernel: ACPI: bus type drm_connector registered May 16 02:18:47.992835 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 02:18:47.992848 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 02:18:47.992861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 16 02:18:47.992874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 02:18:47.992889 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 02:18:47.992903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 02:18:47.992915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 02:18:47.992928 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 02:18:47.992941 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 02:18:47.992954 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 02:18:47.992967 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 02:18:47.992980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 02:18:47.992993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 02:18:47.993034 systemd-journald[1088]: Collecting audit messages is disabled. May 16 02:18:47.993060 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 02:18:47.993073 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 02:18:47.993090 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 16 02:18:47.993104 systemd-journald[1088]: Journal started May 16 02:18:47.993133 systemd-journald[1088]: Runtime Journal (/run/log/journal/be562696422e4b5098112c1c5807b187) is 8M, max 78.2M, 70.2M free. May 16 02:18:47.552306 systemd[1]: Queued start job for default target multi-user.target. May 16 02:18:47.560392 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 02:18:47.560826 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 02:18:47.996782 systemd[1]: Started systemd-journald.service - Journal Service. May 16 02:18:47.997881 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 02:18:48.009467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 02:18:48.010682 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 02:18:48.012639 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 02:18:48.016348 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 02:18:48.016978 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 02:18:48.017023 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 02:18:48.020612 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 16 02:18:48.024499 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 02:18:48.026375 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 16 02:18:48.028407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 02:18:48.036073 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 02:18:48.038352 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 16 02:18:48.038987 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 16 02:18:48.041606 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 02:18:48.042335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 02:18:48.044658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 02:18:48.048575 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 16 02:18:48.055402 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 02:18:48.059035 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 16 02:18:48.061446 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 02:18:48.062078 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 02:18:48.063107 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 02:18:48.082348 systemd-journald[1088]: Time spent on flushing to /var/log/journal/be562696422e4b5098112c1c5807b187 is 34.029ms for 945 entries. May 16 02:18:48.082348 systemd-journald[1088]: System Journal (/var/log/journal/be562696422e4b5098112c1c5807b187) is 8M, max 584.8M, 576.8M free. May 16 02:18:48.140950 systemd-journald[1088]: Received client request to flush runtime journal. May 16 02:18:48.140988 kernel: loop0: detected capacity change from 0 to 151640 May 16 02:18:48.100161 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 02:18:48.101537 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 02:18:48.104675 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 16 02:18:48.113374 udevadm[1143]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 16 02:18:48.142303 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 02:18:48.164690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 02:18:48.168753 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. May 16 02:18:48.168774 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. May 16 02:18:48.176446 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 02:18:48.183832 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 02:18:48.209274 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 02:18:48.231608 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 16 02:18:48.236478 kernel: loop1: detected capacity change from 0 to 224512 May 16 02:18:48.290645 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 16 02:18:48.293510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 02:18:48.325654 kernel: loop2: detected capacity change from 0 to 109808 May 16 02:18:48.328426 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. May 16 02:18:48.328444 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. May 16 02:18:48.336184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 16 02:18:48.381530 kernel: loop3: detected capacity change from 0 to 8 May 16 02:18:48.417269 kernel: loop4: detected capacity change from 0 to 151640 May 16 02:18:48.503870 kernel: loop5: detected capacity change from 0 to 224512 May 16 02:18:48.565862 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 02:18:48.582309 kernel: loop6: detected capacity change from 0 to 109808 May 16 02:18:48.649349 kernel: loop7: detected capacity change from 0 to 8 May 16 02:18:48.649808 (sd-merge)[1168]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 16 02:18:48.651384 (sd-merge)[1168]: Merged extensions into '/usr'. May 16 02:18:48.660192 systemd[1]: Reload requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)... May 16 02:18:48.660396 systemd[1]: Reloading... May 16 02:18:48.764267 zram_generator::config[1192]: No configuration found. May 16 02:18:48.962747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 02:18:49.050763 systemd[1]: Reloading finished in 389 ms. May 16 02:18:49.064469 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 02:18:49.073698 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 02:18:49.081845 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 02:18:49.088391 systemd[1]: Starting ensure-sysext.service... May 16 02:18:49.091510 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 02:18:49.119867 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... May 16 02:18:49.119882 systemd[1]: Reloading... May 16 02:18:49.143820 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 02:18:49.144105 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 02:18:49.144985 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 02:18:49.146532 systemd-udevd[1251]: Using default interface naming scheme 'v255'. May 16 02:18:49.148148 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. May 16 02:18:49.148226 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. May 16 02:18:49.153932 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. May 16 02:18:49.153948 systemd-tmpfiles[1254]: Skipping /boot May 16 02:18:49.176152 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. May 16 02:18:49.176166 systemd-tmpfiles[1254]: Skipping /boot May 16 02:18:49.207301 zram_generator::config[1284]: No configuration found. May 16 02:18:49.225382 ldconfig[1136]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 02:18:49.412272 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1319) May 16 02:18:49.414924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
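The sd-merge step above is systemd-sysext discovering the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-openstack) and overlaying them onto /usr, after which systemd reloads its units. A small sketch of how one image is made visible and merged, assuming the symlink written by Ignition earlier plus the standard systemd-sysext merge verb; neither call is taken verbatim from this log:

import os
import subprocess

raw_image = "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
link = "/etc/extensions/kubernetes.raw"

# The Ignition files stage already created this symlink; shown here only to
# illustrate how an image becomes discoverable by systemd-sysext.
if not os.path.islink(link):
    os.symlink(raw_image, link)

# Overlay all discovered extension images onto /usr; systemd-sysext.service does
# the equivalent at boot, and "refresh" would unmerge and re-merge later on.
subprocess.run(["systemd-sysext", "merge"], check=True)
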
May 16 02:18:49.498260 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 16 02:18:49.511258 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 16 02:18:49.521609 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 16 02:18:49.524152 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 16 02:18:49.523524 systemd[1]: Reloading finished in 403 ms. May 16 02:18:49.533763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 02:18:49.536160 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 02:18:49.537428 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 02:18:49.570295 kernel: mousedev: PS/2 mouse device common for all mice May 16 02:18:49.575256 kernel: ACPI: button: Power Button [PWRF] May 16 02:18:49.600387 systemd[1]: Finished ensure-sysext.service. May 16 02:18:49.614587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 02:18:49.617310 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 16 02:18:49.617351 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 16 02:18:49.618253 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 02:18:49.622747 kernel: Console: switching to colour dummy device 80x25 May 16 02:18:49.621352 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 02:18:49.625290 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 16 02:18:49.625339 kernel: [drm] features: -context_init May 16 02:18:49.625360 kernel: [drm] number of scanouts: 1 May 16 02:18:49.625377 kernel: [drm] number of cap sets: 0 May 16 02:18:49.629287 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 16 02:18:49.630124 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 02:18:49.630543 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 02:18:49.641836 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 16 02:18:49.641899 kernel: Console: switching to colour frame buffer device 160x50 May 16 02:18:49.642774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 02:18:49.652377 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 16 02:18:49.660423 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 02:18:49.664630 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 02:18:49.666757 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 02:18:49.667777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 02:18:49.669345 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 02:18:49.669422 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 02:18:49.671415 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
May 16 02:18:49.675427 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 02:18:49.685123 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 02:18:49.687627 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 02:18:49.691447 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 02:18:49.694447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 02:18:49.695144 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 02:18:49.695881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 02:18:49.696482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 02:18:49.697110 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 02:18:49.698315 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 02:18:49.702970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 02:18:49.703307 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 02:18:49.714630 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 02:18:49.715310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 02:18:49.724429 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 02:18:49.724792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 02:18:49.736922 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 02:18:49.738365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 02:18:49.740743 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 02:18:49.740971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 02:18:49.759083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 02:18:49.761479 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 02:18:49.764893 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 16 02:18:49.770131 augenrules[1415]: No rules May 16 02:18:49.772625 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 16 02:18:49.775124 systemd[1]: audit-rules.service: Deactivated successfully. May 16 02:18:49.775441 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 02:18:49.780522 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 02:18:49.791496 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 02:18:49.809343 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 02:18:49.810620 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 02:18:49.814692 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 16 02:18:49.828584 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 02:18:49.835676 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 02:18:49.845743 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 16 02:18:49.848834 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 02:18:49.857998 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 16 02:18:49.882643 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 02:18:49.910629 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 16 02:18:49.950372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 02:18:49.960730 systemd-networkd[1388]: lo: Link UP May 16 02:18:49.960739 systemd-networkd[1388]: lo: Gained carrier May 16 02:18:49.962053 systemd-networkd[1388]: Enumeration completed May 16 02:18:49.962176 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 02:18:49.965557 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 02:18:49.965567 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 02:18:49.966143 systemd-networkd[1388]: eth0: Link UP May 16 02:18:49.966148 systemd-networkd[1388]: eth0: Gained carrier May 16 02:18:49.966176 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 02:18:49.970704 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 02:18:49.976465 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 02:18:49.981326 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 02:18:49.982250 systemd[1]: Reached target time-set.target - System Time Set. May 16 02:18:49.983328 systemd-networkd[1388]: eth0: DHCPv4 address 172.24.4.70/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 16 02:18:49.984078 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. May 16 02:18:49.985007 systemd-resolved[1390]: Positive Trust Anchors: May 16 02:18:49.985017 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 02:18:49.985059 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 02:18:49.992476 systemd-resolved[1390]: Using system hostname 'ci-4284-0-0-n-0a916fa60e.novalocal'. May 16 02:18:49.994276 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 02:18:49.995010 systemd[1]: Reached target network.target - Network. May 16 02:18:49.995488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 16 02:18:49.995931 systemd[1]: Reached target sysinit.target - System Initialization. May 16 02:18:49.998897 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 02:18:50.000777 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 02:18:50.003460 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 02:18:50.005030 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 02:18:50.006801 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 02:18:50.007950 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 02:18:50.008073 systemd[1]: Reached target paths.target - Path Units. May 16 02:18:50.009557 systemd[1]: Reached target timers.target - Timer Units. May 16 02:18:50.012446 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 02:18:50.015532 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 02:18:50.019834 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 02:18:50.021721 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 02:18:50.022349 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 02:18:50.025275 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 02:18:50.027043 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 02:18:50.029179 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 16 02:18:50.030684 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 02:18:50.032413 systemd[1]: Reached target sockets.target - Socket Units. May 16 02:18:50.034076 systemd[1]: Reached target basic.target - Basic System. May 16 02:18:50.035863 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 02:18:50.035898 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 02:18:50.037077 systemd[1]: Starting containerd.service - containerd container runtime... May 16 02:18:50.042427 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 16 02:18:50.053545 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 02:18:50.060219 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 02:18:50.064721 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 02:18:50.065361 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 02:18:50.072473 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 02:18:50.914494 systemd-timesyncd[1391]: Contacted time server 69.164.213.136:123 (0.flatcar.pool.ntp.org). May 16 02:18:50.914540 systemd-timesyncd[1391]: Initial clock synchronization to Fri 2025-05-16 02:18:50.914400 UTC. May 16 02:18:50.915905 systemd-resolved[1390]: Clock change detected. Flushing caches. 
May 16 02:18:50.917864 jq[1454]: false May 16 02:18:50.919398 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 02:18:50.926812 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 02:18:50.933639 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 02:18:50.936721 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 02:18:50.941363 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 02:18:50.943005 systemd[1]: Starting update-engine.service - Update Engine... May 16 02:18:50.946222 extend-filesystems[1457]: Found loop4 May 16 02:18:50.952941 extend-filesystems[1457]: Found loop5 May 16 02:18:50.952941 extend-filesystems[1457]: Found loop6 May 16 02:18:50.952941 extend-filesystems[1457]: Found loop7 May 16 02:18:50.952941 extend-filesystems[1457]: Found vda May 16 02:18:50.952941 extend-filesystems[1457]: Found vda1 May 16 02:18:50.952941 extend-filesystems[1457]: Found vda2 May 16 02:18:50.952941 extend-filesystems[1457]: Found vda3 May 16 02:18:50.952941 extend-filesystems[1457]: Found usr May 16 02:18:50.952941 extend-filesystems[1457]: Found vda4 May 16 02:18:50.952941 extend-filesystems[1457]: Found vda6 May 16 02:18:50.952941 extend-filesystems[1457]: Found vda7 May 16 02:18:50.952941 extend-filesystems[1457]: Found vda9 May 16 02:18:50.952941 extend-filesystems[1457]: Checking size of /dev/vda9 May 16 02:18:51.045521 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1327) May 16 02:18:50.950698 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 02:18:51.045668 extend-filesystems[1457]: Resized partition /dev/vda9 May 16 02:18:50.963361 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 02:18:51.047552 extend-filesystems[1483]: resize2fs 1.47.2 (1-Jan-2025) May 16 02:18:50.963541 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 02:18:51.052190 jq[1466]: true May 16 02:18:50.963839 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 02:18:50.964000 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 02:18:50.982213 systemd[1]: motdgen.service: Deactivated successfully. May 16 02:18:51.052677 update_engine[1464]: I20250516 02:18:51.028932 1464 main.cc:92] Flatcar Update Engine starting May 16 02:18:50.982718 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 02:18:51.052968 jq[1484]: true May 16 02:18:51.013942 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 02:18:51.050083 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 02:18:51.068380 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 16 02:18:51.064268 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 16 02:18:51.064063 dbus-daemon[1453]: [system] SELinux support is enabled May 16 02:18:51.069417 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 02:18:51.069445 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 02:18:51.072311 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 02:18:51.072331 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 02:18:51.079158 systemd[1]: Started update-engine.service - Update Engine. May 16 02:18:51.095784 update_engine[1464]: I20250516 02:18:51.084103 1464 update_check_scheduler.cc:74] Next update check in 11m28s May 16 02:18:51.095869 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 16 02:18:51.105912 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 02:18:51.157238 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 02:18:51.157238 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 02:18:51.157238 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 16 02:18:51.162495 extend-filesystems[1457]: Resized filesystem in /dev/vda9 May 16 02:18:51.162264 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 02:18:51.162500 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 02:18:51.170572 systemd-logind[1462]: New seat seat0. May 16 02:18:51.178386 bash[1506]: Updated "/home/core/.ssh/authorized_keys" May 16 02:18:51.178334 systemd-logind[1462]: Watching system buttons on /dev/input/event2 (Power Button) May 16 02:18:51.178351 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 02:18:51.178909 systemd[1]: Started systemd-logind.service - User Login Management. May 16 02:18:51.183469 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 02:18:51.194896 systemd[1]: Starting sshkeys.service... May 16 02:18:51.232561 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 16 02:18:51.238033 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
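The online resize above grows the root ext4 filesystem on /dev/vda9 from 1617920 to 2014203 blocks of 4 KiB. A quick arithmetic sketch of what those logged figures amount to:

BLOCK = 4096  # 4 KiB block size reported by resize2fs above

old_blocks, new_blocks = 1_617_920, 2_014_203
old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30

# Roughly 6.17 GiB -> 7.68 GiB, i.e. the root filesystem gained about 1.5 GiB.
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")
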
May 16 02:18:51.318857 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 02:18:51.479042 containerd[1485]: time="2025-05-16T02:18:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 02:18:51.480487 containerd[1485]: time="2025-05-16T02:18:51.480462324Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 16 02:18:51.493074 containerd[1485]: time="2025-05-16T02:18:51.493028332Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.893µs" May 16 02:18:51.493074 containerd[1485]: time="2025-05-16T02:18:51.493066574Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 02:18:51.493180 containerd[1485]: time="2025-05-16T02:18:51.493089677Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 02:18:51.493303 containerd[1485]: time="2025-05-16T02:18:51.493278521Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 02:18:51.493357 containerd[1485]: time="2025-05-16T02:18:51.493305752Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 02:18:51.493357 containerd[1485]: time="2025-05-16T02:18:51.493335238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 02:18:51.493773 containerd[1485]: time="2025-05-16T02:18:51.493397484Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 02:18:51.493773 containerd[1485]: time="2025-05-16T02:18:51.493417332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 02:18:51.493773 containerd[1485]: time="2025-05-16T02:18:51.493641352Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 02:18:51.493773 containerd[1485]: time="2025-05-16T02:18:51.493658694Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 02:18:51.493773 containerd[1485]: time="2025-05-16T02:18:51.493670817Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 02:18:51.493773 containerd[1485]: time="2025-05-16T02:18:51.493681016Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 02:18:51.493773 containerd[1485]: time="2025-05-16T02:18:51.493754514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 02:18:51.494001 containerd[1485]: time="2025-05-16T02:18:51.493976651Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 02:18:51.494031 containerd[1485]: time="2025-05-16T02:18:51.494013179Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 02:18:51.494064 containerd[1485]: time="2025-05-16T02:18:51.494027476Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 02:18:51.494064 containerd[1485]: time="2025-05-16T02:18:51.494059035Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 02:18:51.494656 containerd[1485]: time="2025-05-16T02:18:51.494405224Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 02:18:51.494656 containerd[1485]: time="2025-05-16T02:18:51.494528746Z" level=info msg="metadata content store policy set" policy=shared May 16 02:18:51.508731 containerd[1485]: time="2025-05-16T02:18:51.508679967Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 02:18:51.508807 containerd[1485]: time="2025-05-16T02:18:51.508749617Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 02:18:51.508807 containerd[1485]: time="2025-05-16T02:18:51.508784713Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 02:18:51.508855 containerd[1485]: time="2025-05-16T02:18:51.508805372Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 02:18:51.508855 containerd[1485]: time="2025-05-16T02:18:51.508822995Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 02:18:51.508855 containerd[1485]: time="2025-05-16T02:18:51.508836921Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 02:18:51.508920 containerd[1485]: time="2025-05-16T02:18:51.508853582Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 02:18:51.508920 containerd[1485]: time="2025-05-16T02:18:51.508874562Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 02:18:51.508920 containerd[1485]: time="2025-05-16T02:18:51.508890451Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 02:18:51.508920 containerd[1485]: time="2025-05-16T02:18:51.508906582Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 02:18:51.508920 containerd[1485]: time="2025-05-16T02:18:51.508918023Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 02:18:51.509042 containerd[1485]: time="2025-05-16T02:18:51.508932190Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 02:18:51.509091 containerd[1485]: time="2025-05-16T02:18:51.509060330Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 02:18:51.509124 containerd[1485]: time="2025-05-16T02:18:51.509089565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 16 02:18:51.509124 containerd[1485]: time="2025-05-16T02:18:51.509104924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 16 
02:18:51.509124 containerd[1485]: time="2025-05-16T02:18:51.509118429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 02:18:51.509213 containerd[1485]: time="2025-05-16T02:18:51.509138096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 02:18:51.509213 containerd[1485]: time="2025-05-16T02:18:51.509152132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 02:18:51.509213 containerd[1485]: time="2025-05-16T02:18:51.509169445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 02:18:51.509213 containerd[1485]: time="2025-05-16T02:18:51.509182539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 16 02:18:51.509213 containerd[1485]: time="2025-05-16T02:18:51.509195574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 02:18:51.509213 containerd[1485]: time="2025-05-16T02:18:51.509208047Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 02:18:51.509398 containerd[1485]: time="2025-05-16T02:18:51.509221292Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 02:18:51.509398 containerd[1485]: time="2025-05-16T02:18:51.509282286Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 02:18:51.509398 containerd[1485]: time="2025-05-16T02:18:51.509296613Z" level=info msg="Start snapshots syncer" May 16 02:18:51.509398 containerd[1485]: time="2025-05-16T02:18:51.509317663Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 02:18:51.509609 containerd[1485]: time="2025-05-16T02:18:51.509564586Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 02:18:51.509820 containerd[1485]: time="2025-05-16T02:18:51.509625059Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 02:18:51.509820 containerd[1485]: time="2025-05-16T02:18:51.509684561Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510382139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510436331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510453433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510466778Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510481255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510493348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510508566Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510541348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 02:18:51.511015 containerd[1485]: 
time="2025-05-16T02:18:51.510556977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510569060Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510608223Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510626838Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 02:18:51.511015 containerd[1485]: time="2025-05-16T02:18:51.510637117Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 02:18:51.511292 containerd[1485]: time="2025-05-16T02:18:51.510652095Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 02:18:51.511292 containerd[1485]: time="2025-05-16T02:18:51.510661683Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 02:18:51.511292 containerd[1485]: time="2025-05-16T02:18:51.510677443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 02:18:51.511292 containerd[1485]: time="2025-05-16T02:18:51.510689696Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 02:18:51.511292 containerd[1485]: time="2025-05-16T02:18:51.510708040Z" level=info msg="runtime interface created" May 16 02:18:51.511292 containerd[1485]: time="2025-05-16T02:18:51.510714172Z" level=info msg="created NRI interface" May 16 02:18:51.511292 containerd[1485]: time="2025-05-16T02:18:51.510723058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 02:18:51.511292 containerd[1485]: time="2025-05-16T02:18:51.510739830Z" level=info msg="Connect containerd service" May 16 02:18:51.511292 containerd[1485]: time="2025-05-16T02:18:51.510792769Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 02:18:51.512797 containerd[1485]: time="2025-05-16T02:18:51.512755851Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 02:18:51.535703 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 02:18:51.567948 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 02:18:51.572513 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 02:18:51.589243 systemd[1]: Started sshd@0-172.24.4.70:22-172.24.4.1:34928.service - OpenSSH per-connection server daemon (172.24.4.1:34928). May 16 02:18:51.600306 systemd[1]: issuegen.service: Deactivated successfully. May 16 02:18:51.600622 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 02:18:51.607156 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 02:18:51.634826 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 16 02:18:51.653271 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 02:18:51.660134 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 02:18:51.661237 systemd[1]: Reached target getty.target - Login Prompts. May 16 02:18:51.706245 containerd[1485]: time="2025-05-16T02:18:51.706208371Z" level=info msg="Start subscribing containerd event" May 16 02:18:51.706535 containerd[1485]: time="2025-05-16T02:18:51.706503815Z" level=info msg="Start recovering state" May 16 02:18:51.706829 containerd[1485]: time="2025-05-16T02:18:51.706813125Z" level=info msg="Start event monitor" May 16 02:18:51.706995 containerd[1485]: time="2025-05-16T02:18:51.706980229Z" level=info msg="Start cni network conf syncer for default" May 16 02:18:51.707062 containerd[1485]: time="2025-05-16T02:18:51.707049238Z" level=info msg="Start streaming server" May 16 02:18:51.707130 containerd[1485]: time="2025-05-16T02:18:51.707117386Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 02:18:51.707297 containerd[1485]: time="2025-05-16T02:18:51.707282676Z" level=info msg="runtime interface starting up..." May 16 02:18:51.707353 containerd[1485]: time="2025-05-16T02:18:51.707341877Z" level=info msg="starting plugins..." May 16 02:18:51.707422 containerd[1485]: time="2025-05-16T02:18:51.707409003Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 02:18:51.707573 containerd[1485]: time="2025-05-16T02:18:51.706943299Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 02:18:51.707729 containerd[1485]: time="2025-05-16T02:18:51.707682546Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 02:18:51.707983 systemd[1]: Started containerd.service - containerd container runtime. May 16 02:18:51.710016 containerd[1485]: time="2025-05-16T02:18:51.709964646Z" level=info msg="containerd successfully booted in 0.231344s" May 16 02:18:52.137069 systemd-networkd[1388]: eth0: Gained IPv6LL May 16 02:18:52.138952 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 02:18:52.144555 systemd[1]: Reached target network-online.target - Network is Online. May 16 02:18:52.155015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 02:18:52.161007 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 02:18:52.204865 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 02:18:52.515205 sshd[1538]: Accepted publickey for core from 172.24.4.1 port 34928 ssh2: RSA SHA256:Cx7VPWHjnVhR1iGHoySUsUnBzgR6Ztou3pRHaMJ2+pM May 16 02:18:52.518837 sshd-session[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 02:18:52.541881 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 02:18:52.549252 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 02:18:52.573986 systemd-logind[1462]: New session 1 of user core. May 16 02:18:52.585174 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 02:18:52.592146 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 02:18:52.609193 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 02:18:52.617392 systemd-logind[1462]: New session c1 of user core. May 16 02:18:52.829929 systemd[1570]: Queued start job for default target default.target. 
May 16 02:18:52.835655 systemd[1570]: Created slice app.slice - User Application Slice. May 16 02:18:52.835696 systemd[1570]: Reached target paths.target - Paths. May 16 02:18:52.835831 systemd[1570]: Reached target timers.target - Timers. May 16 02:18:52.838842 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 02:18:52.849460 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 02:18:52.850411 systemd[1570]: Reached target sockets.target - Sockets. May 16 02:18:52.850632 systemd[1570]: Reached target basic.target - Basic System. May 16 02:18:52.850824 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 02:18:52.850942 systemd[1570]: Reached target default.target - Main User Target. May 16 02:18:52.850974 systemd[1570]: Startup finished in 226ms. May 16 02:18:52.859943 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 02:18:53.350274 systemd[1]: Started sshd@1-172.24.4.70:22-172.24.4.1:34944.service - OpenSSH per-connection server daemon (172.24.4.1:34944). May 16 02:18:54.403974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 02:18:54.421836 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 02:18:55.032816 sshd[1581]: Accepted publickey for core from 172.24.4.1 port 34944 ssh2: RSA SHA256:Cx7VPWHjnVhR1iGHoySUsUnBzgR6Ztou3pRHaMJ2+pM May 16 02:18:55.035544 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 02:18:55.048885 systemd-logind[1462]: New session 2 of user core. May 16 02:18:55.058204 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 02:18:55.815815 sshd[1595]: Connection closed by 172.24.4.1 port 34944 May 16 02:18:55.816419 sshd-session[1581]: pam_unix(sshd:session): session closed for user core May 16 02:18:55.837677 systemd[1]: sshd@1-172.24.4.70:22-172.24.4.1:34944.service: Deactivated successfully. May 16 02:18:55.842422 systemd[1]: session-2.scope: Deactivated successfully. May 16 02:18:55.845132 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. May 16 02:18:55.850158 systemd[1]: Started sshd@2-172.24.4.70:22-172.24.4.1:55228.service - OpenSSH per-connection server daemon (172.24.4.1:55228). May 16 02:18:55.859059 systemd-logind[1462]: Removed session 2. May 16 02:18:55.866597 kubelet[1590]: E0516 02:18:55.866535 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 02:18:55.872571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 02:18:55.873057 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 02:18:55.873866 systemd[1]: kubelet.service: Consumed 2.248s CPU time, 269.4M memory peak. May 16 02:18:56.728059 login[1547]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 16 02:18:56.728974 login[1549]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 16 02:18:56.739926 systemd-logind[1462]: New session 4 of user core. May 16 02:18:56.749521 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 16 02:18:56.757880 systemd-logind[1462]: New session 3 of user core. May 16 02:18:56.765200 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 02:18:57.354001 sshd[1601]: Accepted publickey for core from 172.24.4.1 port 55228 ssh2: RSA SHA256:Cx7VPWHjnVhR1iGHoySUsUnBzgR6Ztou3pRHaMJ2+pM May 16 02:18:57.356612 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 02:18:57.366214 systemd-logind[1462]: New session 5 of user core. May 16 02:18:57.380331 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 02:18:57.974202 coreos-metadata[1452]: May 16 02:18:57.974 WARN failed to locate config-drive, using the metadata service API instead May 16 02:18:58.040017 coreos-metadata[1452]: May 16 02:18:58.039 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 16 02:18:58.059666 sshd[1631]: Connection closed by 172.24.4.1 port 55228 May 16 02:18:58.060571 sshd-session[1601]: pam_unix(sshd:session): session closed for user core May 16 02:18:58.067914 systemd[1]: sshd@2-172.24.4.70:22-172.24.4.1:55228.service: Deactivated successfully. May 16 02:18:58.071541 systemd[1]: session-5.scope: Deactivated successfully. May 16 02:18:58.073515 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. May 16 02:18:58.076371 systemd-logind[1462]: Removed session 5. May 16 02:18:58.297402 coreos-metadata[1452]: May 16 02:18:58.296 INFO Fetch successful May 16 02:18:58.297402 coreos-metadata[1452]: May 16 02:18:58.296 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 16 02:18:58.307890 coreos-metadata[1452]: May 16 02:18:58.307 INFO Fetch successful May 16 02:18:58.307890 coreos-metadata[1452]: May 16 02:18:58.307 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 16 02:18:58.318131 coreos-metadata[1452]: May 16 02:18:58.318 INFO Fetch successful May 16 02:18:58.318131 coreos-metadata[1452]: May 16 02:18:58.318 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 16 02:18:58.328086 coreos-metadata[1452]: May 16 02:18:58.328 INFO Fetch successful May 16 02:18:58.328086 coreos-metadata[1452]: May 16 02:18:58.328 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 16 02:18:58.336133 coreos-metadata[1452]: May 16 02:18:58.335 INFO Fetch successful May 16 02:18:58.336133 coreos-metadata[1452]: May 16 02:18:58.335 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 16 02:18:58.346103 coreos-metadata[1452]: May 16 02:18:58.346 INFO Fetch successful May 16 02:18:58.350173 coreos-metadata[1511]: May 16 02:18:58.350 WARN failed to locate config-drive, using the metadata service API instead May 16 02:18:58.394471 coreos-metadata[1511]: May 16 02:18:58.394 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 16 02:18:58.396392 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 16 02:18:58.397920 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 16 02:18:58.410405 coreos-metadata[1511]: May 16 02:18:58.410 INFO Fetch successful May 16 02:18:58.410589 coreos-metadata[1511]: May 16 02:18:58.410 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 16 02:18:58.423460 coreos-metadata[1511]: May 16 02:18:58.423 INFO Fetch successful May 16 02:18:58.429090 unknown[1511]: wrote ssh authorized keys file for user: core May 16 02:18:58.477816 update-ssh-keys[1645]: Updated "/home/core/.ssh/authorized_keys" May 16 02:18:58.478846 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 16 02:18:58.482563 systemd[1]: Finished sshkeys.service. May 16 02:18:58.488183 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 02:18:58.488462 systemd[1]: Startup finished in 1.221s (kernel) + 14.890s (initrd) + 11.005s (userspace) = 27.116s. May 16 02:19:05.955225 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 02:19:05.959151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 02:19:06.372757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 02:19:06.395956 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 02:19:06.517250 kubelet[1657]: E0516 02:19:06.517027 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 02:19:06.525591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 02:19:06.526097 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 02:19:06.526993 systemd[1]: kubelet.service: Consumed 341ms CPU time, 112.5M memory peak. May 16 02:19:08.082200 systemd[1]: Started sshd@3-172.24.4.70:22-172.24.4.1:50324.service - OpenSSH per-connection server daemon (172.24.4.1:50324). May 16 02:19:09.625055 sshd[1666]: Accepted publickey for core from 172.24.4.1 port 50324 ssh2: RSA SHA256:Cx7VPWHjnVhR1iGHoySUsUnBzgR6Ztou3pRHaMJ2+pM May 16 02:19:09.627681 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 02:19:09.641709 systemd-logind[1462]: New session 6 of user core. May 16 02:19:09.651086 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 02:19:10.393555 sshd[1668]: Connection closed by 172.24.4.1 port 50324 May 16 02:19:10.390882 sshd-session[1666]: pam_unix(sshd:session): session closed for user core May 16 02:19:10.409427 systemd[1]: sshd@3-172.24.4.70:22-172.24.4.1:50324.service: Deactivated successfully. May 16 02:19:10.412653 systemd[1]: session-6.scope: Deactivated successfully. May 16 02:19:10.415710 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. May 16 02:19:10.419453 systemd[1]: Started sshd@4-172.24.4.70:22-172.24.4.1:50332.service - OpenSSH per-connection server daemon (172.24.4.1:50332). May 16 02:19:10.422719 systemd-logind[1462]: Removed session 6. 
May 16 02:19:11.871116 sshd[1673]: Accepted publickey for core from 172.24.4.1 port 50332 ssh2: RSA SHA256:Cx7VPWHjnVhR1iGHoySUsUnBzgR6Ztou3pRHaMJ2+pM May 16 02:19:11.873876 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 02:19:11.885885 systemd-logind[1462]: New session 7 of user core. May 16 02:19:11.897151 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 02:19:12.556405 sshd[1676]: Connection closed by 172.24.4.1 port 50332 May 16 02:19:12.555442 sshd-session[1673]: pam_unix(sshd:session): session closed for user core May 16 02:19:12.569904 systemd[1]: sshd@4-172.24.4.70:22-172.24.4.1:50332.service: Deactivated successfully. May 16 02:19:12.573513 systemd[1]: session-7.scope: Deactivated successfully. May 16 02:19:12.575240 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. May 16 02:19:12.580276 systemd[1]: Started sshd@5-172.24.4.70:22-172.24.4.1:50334.service - OpenSSH per-connection server daemon (172.24.4.1:50334). May 16 02:19:12.583450 systemd-logind[1462]: Removed session 7. May 16 02:19:13.905050 sshd[1681]: Accepted publickey for core from 172.24.4.1 port 50334 ssh2: RSA SHA256:Cx7VPWHjnVhR1iGHoySUsUnBzgR6Ztou3pRHaMJ2+pM May 16 02:19:13.907530 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 02:19:13.921546 systemd-logind[1462]: New session 8 of user core. May 16 02:19:13.928096 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 02:19:14.483302 sshd[1684]: Connection closed by 172.24.4.1 port 50334 May 16 02:19:14.484294 sshd-session[1681]: pam_unix(sshd:session): session closed for user core May 16 02:19:14.503159 systemd[1]: sshd@5-172.24.4.70:22-172.24.4.1:50334.service: Deactivated successfully. May 16 02:19:14.506160 systemd[1]: session-8.scope: Deactivated successfully. May 16 02:19:14.509193 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. May 16 02:19:14.512198 systemd[1]: Started sshd@6-172.24.4.70:22-172.24.4.1:32802.service - OpenSSH per-connection server daemon (172.24.4.1:32802). May 16 02:19:14.514461 systemd-logind[1462]: Removed session 8. May 16 02:19:15.879376 sshd[1689]: Accepted publickey for core from 172.24.4.1 port 32802 ssh2: RSA SHA256:Cx7VPWHjnVhR1iGHoySUsUnBzgR6Ztou3pRHaMJ2+pM May 16 02:19:15.882026 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 02:19:15.893711 systemd-logind[1462]: New session 9 of user core. May 16 02:19:15.905085 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 02:19:16.364975 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 02:19:16.365612 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 02:19:16.383612 sudo[1693]: pam_unix(sudo:session): session closed for user root May 16 02:19:16.644258 sshd[1692]: Connection closed by 172.24.4.1 port 32802 May 16 02:19:16.643928 sshd-session[1689]: pam_unix(sshd:session): session closed for user core May 16 02:19:16.664248 systemd[1]: sshd@6-172.24.4.70:22-172.24.4.1:32802.service: Deactivated successfully. May 16 02:19:16.667332 systemd[1]: session-9.scope: Deactivated successfully. May 16 02:19:16.670332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 02:19:16.671904 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. 
May 16 02:19:16.676069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 02:19:16.680161 systemd[1]: Started sshd@7-172.24.4.70:22-172.24.4.1:32806.service - OpenSSH per-connection server daemon (172.24.4.1:32806). May 16 02:19:16.685891 systemd-logind[1462]: Removed session 9. May 16 02:19:17.090914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 02:19:17.107735 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 02:19:17.183960 kubelet[1709]: E0516 02:19:17.183856 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 02:19:17.187031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 02:19:17.187166 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 02:19:17.187576 systemd[1]: kubelet.service: Consumed 288ms CPU time, 111.7M memory peak. May 16 02:19:18.081577 sshd[1699]: Accepted publickey for core from 172.24.4.1 port 32806 ssh2: RSA SHA256:Cx7VPWHjnVhR1iGHoySUsUnBzgR6Ztou3pRHaMJ2+pM May 16 02:19:18.084395 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 02:19:18.095909 systemd-logind[1462]: New session 10 of user core. May 16 02:19:18.102101 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 02:19:18.520202 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 02:19:18.521546 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 02:19:18.528967 sudo[1718]: pam_unix(sudo:session): session closed for user root May 16 02:19:18.540466 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 02:19:18.541181 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 02:19:18.562389 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 02:19:18.637868 augenrules[1740]: No rules May 16 02:19:18.640652 systemd[1]: audit-rules.service: Deactivated successfully. May 16 02:19:18.641195 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 02:19:18.643110 sudo[1717]: pam_unix(sudo:session): session closed for user root May 16 02:19:18.889640 sshd[1716]: Connection closed by 172.24.4.1 port 32806 May 16 02:19:18.891174 sshd-session[1699]: pam_unix(sshd:session): session closed for user core May 16 02:19:18.909902 systemd[1]: sshd@7-172.24.4.70:22-172.24.4.1:32806.service: Deactivated successfully. May 16 02:19:18.913093 systemd[1]: session-10.scope: Deactivated successfully. May 16 02:19:18.915002 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit. May 16 02:19:18.919311 systemd[1]: Started sshd@8-172.24.4.70:22-172.24.4.1:32814.service - OpenSSH per-connection server daemon (172.24.4.1:32814). May 16 02:19:18.921508 systemd-logind[1462]: Removed session 10. 
May 16 02:19:19.992504 sshd[1748]: Accepted publickey for core from 172.24.4.1 port 32814 ssh2: RSA SHA256:Cx7VPWHjnVhR1iGHoySUsUnBzgR6Ztou3pRHaMJ2+pM May 16 02:19:19.995181 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 02:19:20.008895 systemd-logind[1462]: New session 11 of user core. May 16 02:19:20.025064 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 02:19:20.559098 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 02:19:20.560423 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 02:19:21.964374 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 02:19:21.964836 systemd[1]: kubelet.service: Consumed 288ms CPU time, 111.7M memory peak. May 16 02:19:21.971291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 02:19:22.038861 systemd[1]: Reload requested from client PID 1785 ('systemctl') (unit session-11.scope)... May 16 02:19:22.038893 systemd[1]: Reloading... May 16 02:19:22.145420 zram_generator::config[1826]: No configuration found. May 16 02:19:22.649019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 02:19:22.767177 systemd[1]: Reloading finished in 727 ms. May 16 02:19:22.823393 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 02:19:22.823472 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 02:19:22.823881 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 02:19:22.823919 systemd[1]: kubelet.service: Consumed 120ms CPU time, 98.3M memory peak. May 16 02:19:22.826096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 02:19:22.985491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 02:19:22.996147 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 02:19:23.073656 kubelet[1896]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 02:19:23.073656 kubelet[1896]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 02:19:23.073656 kubelet[1896]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 02:19:23.073656 kubelet[1896]: I0516 02:19:23.073645 1896 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 02:19:23.653816 kubelet[1896]: I0516 02:19:23.653750 1896 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 02:19:23.653816 kubelet[1896]: I0516 02:19:23.653789 1896 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 02:19:23.654811 kubelet[1896]: I0516 02:19:23.654419 1896 server.go:954] "Client rotation is on, will bootstrap in background" May 16 02:19:23.691607 kubelet[1896]: I0516 02:19:23.690505 1896 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 02:19:23.700055 kubelet[1896]: I0516 02:19:23.700013 1896 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 02:19:23.702940 kubelet[1896]: I0516 02:19:23.702913 1896 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 02:19:23.703180 kubelet[1896]: I0516 02:19:23.703082 1896 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 02:19:23.703311 kubelet[1896]: I0516 02:19:23.703121 1896 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.70","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 02:19:23.703311 kubelet[1896]: I0516 02:19:23.703306 1896 topology_manager.go:138] "Creating topology manager with none policy" May 16 02:19:23.703311 kubelet[1896]: I0516 02:19:23.703317 1896 container_manager_linux.go:304] "Creating device plugin manager" May 16 02:19:23.703825 kubelet[1896]: I0516 02:19:23.703430 1896 state_mem.go:36] "Initialized new in-memory state store" May 16 02:19:23.710915 kubelet[1896]: I0516 02:19:23.710857 1896 kubelet.go:446] "Attempting to sync node with API server" May 16 02:19:23.710915 kubelet[1896]: I0516 02:19:23.710909 1896 kubelet.go:341] "Adding 
static pod path" path="/etc/kubernetes/manifests" May 16 02:19:23.711119 kubelet[1896]: I0516 02:19:23.710932 1896 kubelet.go:352] "Adding apiserver pod source" May 16 02:19:23.711119 kubelet[1896]: I0516 02:19:23.710943 1896 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 02:19:23.718136 kubelet[1896]: E0516 02:19:23.717677 1896 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:23.718136 kubelet[1896]: E0516 02:19:23.717827 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:23.719658 kubelet[1896]: I0516 02:19:23.719033 1896 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 16 02:19:23.719658 kubelet[1896]: I0516 02:19:23.719472 1896 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 02:19:23.720661 kubelet[1896]: W0516 02:19:23.720341 1896 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 02:19:23.723041 kubelet[1896]: I0516 02:19:23.722816 1896 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 02:19:23.723041 kubelet[1896]: I0516 02:19:23.722852 1896 server.go:1287] "Started kubelet" May 16 02:19:23.724587 kubelet[1896]: I0516 02:19:23.724237 1896 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 02:19:23.726462 kubelet[1896]: I0516 02:19:23.726411 1896 server.go:479] "Adding debug handlers to kubelet server" May 16 02:19:23.729514 kubelet[1896]: I0516 02:19:23.728879 1896 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 02:19:23.729514 kubelet[1896]: I0516 02:19:23.729126 1896 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 02:19:23.731938 kubelet[1896]: I0516 02:19:23.731876 1896 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 02:19:23.734261 kubelet[1896]: I0516 02:19:23.734244 1896 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 02:19:23.738259 kubelet[1896]: I0516 02:19:23.738214 1896 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 02:19:23.740894 kubelet[1896]: I0516 02:19:23.738380 1896 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 02:19:23.741044 kubelet[1896]: E0516 02:19:23.738733 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:23.741044 kubelet[1896]: E0516 02:19:23.740095 1896 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 02:19:23.741044 kubelet[1896]: I0516 02:19:23.741020 1896 reconciler.go:26] "Reconciler: start to sync state" May 16 02:19:23.743126 kubelet[1896]: I0516 02:19:23.741307 1896 factory.go:221] Registration of the systemd container factory successfully May 16 02:19:23.743126 kubelet[1896]: I0516 02:19:23.741416 1896 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 02:19:23.754085 kubelet[1896]: W0516 02:19:23.754021 1896 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 16 02:19:23.754249 kubelet[1896]: E0516 02:19:23.754166 1896 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 16 02:19:23.754597 kubelet[1896]: W0516 02:19:23.754555 1896 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 16 02:19:23.754702 kubelet[1896]: E0516 02:19:23.754645 1896 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 16 02:19:23.758655 kubelet[1896]: E0516 02:19:23.754844 1896 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.70.183fe072b5ba7dfb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.70,UID:172.24.4.70,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.70,},FirstTimestamp:2025-05-16 02:19:23.722833403 +0000 UTC m=+0.723277057,LastTimestamp:2025-05-16 02:19:23.722833403 +0000 UTC m=+0.723277057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.70,}" May 16 02:19:23.768920 kubelet[1896]: I0516 02:19:23.767802 1896 factory.go:221] Registration of the containerd container factory successfully May 16 02:19:23.768920 kubelet[1896]: W0516 02:19:23.768240 1896 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.70" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 16 02:19:23.768920 kubelet[1896]: E0516 02:19:23.768322 1896 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.70\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster 
scope" logger="UnhandledError" May 16 02:19:23.768920 kubelet[1896]: E0516 02:19:23.768478 1896 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.70\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 16 02:19:23.769792 kubelet[1896]: E0516 02:19:23.769403 1896 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.70.183fe072b6c1b867 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.70,UID:172.24.4.70,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.24.4.70,},FirstTimestamp:2025-05-16 02:19:23.740084327 +0000 UTC m=+0.740527971,LastTimestamp:2025-05-16 02:19:23.740084327 +0000 UTC m=+0.740527971,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.70,}" May 16 02:19:23.788748 kubelet[1896]: I0516 02:19:23.788705 1896 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 02:19:23.788748 kubelet[1896]: I0516 02:19:23.788729 1896 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 02:19:23.788748 kubelet[1896]: I0516 02:19:23.788744 1896 state_mem.go:36] "Initialized new in-memory state store" May 16 02:19:23.793584 kubelet[1896]: I0516 02:19:23.793547 1896 policy_none.go:49] "None policy: Start" May 16 02:19:23.793584 kubelet[1896]: I0516 02:19:23.793571 1896 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 02:19:23.793584 kubelet[1896]: I0516 02:19:23.793585 1896 state_mem.go:35] "Initializing new in-memory state store" May 16 02:19:23.797808 kubelet[1896]: E0516 02:19:23.797583 1896 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.70.183fe072b9989082 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.70,UID:172.24.4.70,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.24.4.70 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.24.4.70,},FirstTimestamp:2025-05-16 02:19:23.787718786 +0000 UTC m=+0.788162430,LastTimestamp:2025-05-16 02:19:23.787718786 +0000 UTC m=+0.788162430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.70,}" May 16 02:19:23.803426 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 02:19:23.825514 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 02:19:23.833272 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 02:19:23.840389 kubelet[1896]: I0516 02:19:23.840192 1896 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 16 02:19:23.841656 kubelet[1896]: E0516 02:19:23.841548 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:23.842025 kubelet[1896]: I0516 02:19:23.842011 1896 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 02:19:23.842883 kubelet[1896]: I0516 02:19:23.842107 1896 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 02:19:23.842883 kubelet[1896]: I0516 02:19:23.842132 1896 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 02:19:23.842883 kubelet[1896]: I0516 02:19:23.842150 1896 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 02:19:23.842883 kubelet[1896]: I0516 02:19:23.842158 1896 kubelet.go:2382] "Starting kubelet main sync loop" May 16 02:19:23.842883 kubelet[1896]: I0516 02:19:23.842299 1896 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 02:19:23.842883 kubelet[1896]: I0516 02:19:23.842320 1896 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 02:19:23.842883 kubelet[1896]: E0516 02:19:23.842336 1896 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 02:19:23.845017 kubelet[1896]: I0516 02:19:23.844989 1896 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 02:19:23.850571 kubelet[1896]: E0516 02:19:23.850531 1896 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 02:19:23.850707 kubelet[1896]: E0516 02:19:23.850591 1896 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.70\" not found" May 16 02:19:23.945191 kubelet[1896]: I0516 02:19:23.944376 1896 kubelet_node_status.go:75] "Attempting to register node" node="172.24.4.70" May 16 02:19:23.966728 kubelet[1896]: I0516 02:19:23.966420 1896 kubelet_node_status.go:78] "Successfully registered node" node="172.24.4.70" May 16 02:19:23.966728 kubelet[1896]: E0516 02:19:23.966481 1896 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.24.4.70\": node \"172.24.4.70\" not found" May 16 02:19:24.005389 kubelet[1896]: E0516 02:19:24.005301 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:24.078956 sudo[1752]: pam_unix(sudo:session): session closed for user root May 16 02:19:24.106384 kubelet[1896]: E0516 02:19:24.106264 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:24.206830 kubelet[1896]: E0516 02:19:24.206685 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:24.223868 sshd[1751]: Connection closed by 172.24.4.1 port 32814 May 16 02:19:24.224807 sshd-session[1748]: pam_unix(sshd:session): session closed for user core May 16 02:19:24.231150 systemd[1]: sshd@8-172.24.4.70:22-172.24.4.1:32814.service: Deactivated successfully. May 16 02:19:24.238318 systemd[1]: session-11.scope: Deactivated successfully. 
May 16 02:19:24.239492 systemd[1]: session-11.scope: Consumed 1.119s CPU time, 73.2M memory peak. May 16 02:19:24.247318 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit. May 16 02:19:24.250760 systemd-logind[1462]: Removed session 11. May 16 02:19:24.307514 kubelet[1896]: E0516 02:19:24.307361 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:24.408360 kubelet[1896]: E0516 02:19:24.408261 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:24.509000 kubelet[1896]: E0516 02:19:24.508831 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:24.609178 kubelet[1896]: E0516 02:19:24.609090 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:24.659666 kubelet[1896]: I0516 02:19:24.659497 1896 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 16 02:19:24.660051 kubelet[1896]: W0516 02:19:24.659987 1896 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 02:19:24.710010 kubelet[1896]: E0516 02:19:24.709906 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:24.718663 kubelet[1896]: E0516 02:19:24.718597 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:24.811631 kubelet[1896]: E0516 02:19:24.810541 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:24.911471 kubelet[1896]: E0516 02:19:24.911305 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:25.012255 kubelet[1896]: E0516 02:19:25.011936 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:25.114373 kubelet[1896]: E0516 02:19:25.114014 1896 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.24.4.70\" not found" May 16 02:19:25.217847 kubelet[1896]: I0516 02:19:25.217723 1896 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 16 02:19:25.219483 containerd[1485]: time="2025-05-16T02:19:25.219235409Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 16 02:19:25.221852 kubelet[1896]: I0516 02:19:25.220289 1896 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 16 02:19:25.716975 kubelet[1896]: I0516 02:19:25.715989 1896 apiserver.go:52] "Watching apiserver" May 16 02:19:25.719627 kubelet[1896]: E0516 02:19:25.719273 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:25.730209 kubelet[1896]: E0516 02:19:25.727214 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngpwb" podUID="bbe556f2-8870-4f54-8884-f3b8b2556667" May 16 02:19:25.742155 kubelet[1896]: I0516 02:19:25.742102 1896 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 02:19:25.757426 kubelet[1896]: I0516 02:19:25.757323 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1112de68-d278-4c83-87aa-6f1aa6135129-node-certs\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.757633 kubelet[1896]: I0516 02:19:25.757461 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1112de68-d278-4c83-87aa-6f1aa6135129-xtables-lock\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.757633 kubelet[1896]: I0516 02:19:25.757531 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bbe556f2-8870-4f54-8884-f3b8b2556667-socket-dir\") pod \"csi-node-driver-ngpwb\" (UID: \"bbe556f2-8870-4f54-8884-f3b8b2556667\") " pod="calico-system/csi-node-driver-ngpwb" May 16 02:19:25.757633 kubelet[1896]: I0516 02:19:25.757591 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8760869-5f2b-4307-9c56-1da021d7a694-xtables-lock\") pod \"kube-proxy-98drc\" (UID: \"a8760869-5f2b-4307-9c56-1da021d7a694\") " pod="kube-system/kube-proxy-98drc" May 16 02:19:25.757882 kubelet[1896]: I0516 02:19:25.757679 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1112de68-d278-4c83-87aa-6f1aa6135129-flexvol-driver-host\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.758994 kubelet[1896]: I0516 02:19:25.757760 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1112de68-d278-4c83-87aa-6f1aa6135129-lib-modules\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.759804 kubelet[1896]: I0516 02:19:25.759518 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/1112de68-d278-4c83-87aa-6f1aa6135129-var-lib-calico\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.759804 kubelet[1896]: I0516 02:19:25.759591 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbe556f2-8870-4f54-8884-f3b8b2556667-kubelet-dir\") pod \"csi-node-driver-ngpwb\" (UID: \"bbe556f2-8870-4f54-8884-f3b8b2556667\") " pod="calico-system/csi-node-driver-ngpwb" May 16 02:19:25.759804 kubelet[1896]: I0516 02:19:25.759657 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bbe556f2-8870-4f54-8884-f3b8b2556667-varrun\") pod \"csi-node-driver-ngpwb\" (UID: \"bbe556f2-8870-4f54-8884-f3b8b2556667\") " pod="calico-system/csi-node-driver-ngpwb" May 16 02:19:25.759804 kubelet[1896]: I0516 02:19:25.759727 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a8760869-5f2b-4307-9c56-1da021d7a694-kube-proxy\") pod \"kube-proxy-98drc\" (UID: \"a8760869-5f2b-4307-9c56-1da021d7a694\") " pod="kube-system/kube-proxy-98drc" May 16 02:19:25.762093 systemd[1]: Created slice kubepods-besteffort-pod1112de68_d278_4c83_87aa_6f1aa6135129.slice - libcontainer container kubepods-besteffort-pod1112de68_d278_4c83_87aa_6f1aa6135129.slice. May 16 02:19:25.764343 kubelet[1896]: I0516 02:19:25.763704 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8760869-5f2b-4307-9c56-1da021d7a694-lib-modules\") pod \"kube-proxy-98drc\" (UID: \"a8760869-5f2b-4307-9c56-1da021d7a694\") " pod="kube-system/kube-proxy-98drc" May 16 02:19:25.764343 kubelet[1896]: I0516 02:19:25.763888 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcfg4\" (UniqueName: \"kubernetes.io/projected/a8760869-5f2b-4307-9c56-1da021d7a694-kube-api-access-tcfg4\") pod \"kube-proxy-98drc\" (UID: \"a8760869-5f2b-4307-9c56-1da021d7a694\") " pod="kube-system/kube-proxy-98drc" May 16 02:19:25.764343 kubelet[1896]: I0516 02:19:25.763962 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1112de68-d278-4c83-87aa-6f1aa6135129-cni-bin-dir\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.764343 kubelet[1896]: I0516 02:19:25.764031 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1112de68-d278-4c83-87aa-6f1aa6135129-cni-net-dir\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.764343 kubelet[1896]: I0516 02:19:25.764097 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1112de68-d278-4c83-87aa-6f1aa6135129-policysync\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.764693 kubelet[1896]: I0516 02:19:25.764180 1896 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1112de68-d278-4c83-87aa-6f1aa6135129-tigera-ca-bundle\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.764693 kubelet[1896]: I0516 02:19:25.764267 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bbe556f2-8870-4f54-8884-f3b8b2556667-registration-dir\") pod \"csi-node-driver-ngpwb\" (UID: \"bbe556f2-8870-4f54-8884-f3b8b2556667\") " pod="calico-system/csi-node-driver-ngpwb" May 16 02:19:25.764693 kubelet[1896]: I0516 02:19:25.764387 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4tqf\" (UniqueName: \"kubernetes.io/projected/bbe556f2-8870-4f54-8884-f3b8b2556667-kube-api-access-z4tqf\") pod \"csi-node-driver-ngpwb\" (UID: \"bbe556f2-8870-4f54-8884-f3b8b2556667\") " pod="calico-system/csi-node-driver-ngpwb" May 16 02:19:25.764693 kubelet[1896]: I0516 02:19:25.764459 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1112de68-d278-4c83-87aa-6f1aa6135129-cni-log-dir\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.764693 kubelet[1896]: I0516 02:19:25.764528 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1112de68-d278-4c83-87aa-6f1aa6135129-var-run-calico\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.765073 kubelet[1896]: I0516 02:19:25.764614 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47jq6\" (UniqueName: \"kubernetes.io/projected/1112de68-d278-4c83-87aa-6f1aa6135129-kube-api-access-47jq6\") pod \"calico-node-rxltt\" (UID: \"1112de68-d278-4c83-87aa-6f1aa6135129\") " pod="calico-system/calico-node-rxltt" May 16 02:19:25.807610 systemd[1]: Created slice kubepods-besteffort-poda8760869_5f2b_4307_9c56_1da021d7a694.slice - libcontainer container kubepods-besteffort-poda8760869_5f2b_4307_9c56_1da021d7a694.slice. May 16 02:19:25.883478 kubelet[1896]: E0516 02:19:25.883407 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:25.883827 kubelet[1896]: W0516 02:19:25.883709 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:25.884064 kubelet[1896]: E0516 02:19:25.884025 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:25.911864 kubelet[1896]: E0516 02:19:25.911739 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:25.914508 kubelet[1896]: W0516 02:19:25.914440 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:25.915285 kubelet[1896]: E0516 02:19:25.915022 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:25.927306 kubelet[1896]: E0516 02:19:25.927243 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:25.927306 kubelet[1896]: W0516 02:19:25.927263 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:25.927306 kubelet[1896]: E0516 02:19:25.927289 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:25.930798 kubelet[1896]: E0516 02:19:25.929645 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:25.930798 kubelet[1896]: W0516 02:19:25.929661 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:25.930798 kubelet[1896]: E0516 02:19:25.929674 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:26.104930 containerd[1485]: time="2025-05-16T02:19:26.104545571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rxltt,Uid:1112de68-d278-4c83-87aa-6f1aa6135129,Namespace:calico-system,Attempt:0,}" May 16 02:19:26.114042 containerd[1485]: time="2025-05-16T02:19:26.113954200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98drc,Uid:a8760869-5f2b-4307-9c56-1da021d7a694,Namespace:kube-system,Attempt:0,}" May 16 02:19:26.720721 kubelet[1896]: E0516 02:19:26.720558 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:26.818681 containerd[1485]: time="2025-05-16T02:19:26.818466488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 02:19:26.824265 containerd[1485]: time="2025-05-16T02:19:26.824064860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 16 02:19:26.826144 containerd[1485]: time="2025-05-16T02:19:26.825948212Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 02:19:26.829465 containerd[1485]: time="2025-05-16T02:19:26.829233999Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 16 02:19:26.832045 containerd[1485]: time="2025-05-16T02:19:26.831900413Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 02:19:26.837012 containerd[1485]: time="2025-05-16T02:19:26.836862642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 02:19:26.840275 containerd[1485]: time="2025-05-16T02:19:26.839320632Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 703.076058ms" May 16 02:19:26.843916 containerd[1485]: time="2025-05-16T02:19:26.843631708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 719.527307ms" May 16 02:19:26.907142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294692029.mount: Deactivated successfully. 
May 16 02:19:26.911386 containerd[1485]: time="2025-05-16T02:19:26.911326029Z" level=info msg="connecting to shim 5fc37ea1a01c549841114e7d9413dc348f80318fd6099caee3f2d6615bd099e2" address="unix:///run/containerd/s/b1b0526d7d6445337931fd5008a238c584d200cfeac4ea53b5a607e6431217b2" namespace=k8s.io protocol=ttrpc version=3 May 16 02:19:26.926132 containerd[1485]: time="2025-05-16T02:19:26.926065700Z" level=info msg="connecting to shim c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9" address="unix:///run/containerd/s/14e4974d2d4ea702c29b9d3e9f5fa56d2e07c3ee7bfca8a0599f3c8720ff502b" namespace=k8s.io protocol=ttrpc version=3 May 16 02:19:26.961000 systemd[1]: Started cri-containerd-5fc37ea1a01c549841114e7d9413dc348f80318fd6099caee3f2d6615bd099e2.scope - libcontainer container 5fc37ea1a01c549841114e7d9413dc348f80318fd6099caee3f2d6615bd099e2. May 16 02:19:27.038977 systemd[1]: Started cri-containerd-c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9.scope - libcontainer container c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9. May 16 02:19:27.053382 containerd[1485]: time="2025-05-16T02:19:27.053297629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98drc,Uid:a8760869-5f2b-4307-9c56-1da021d7a694,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fc37ea1a01c549841114e7d9413dc348f80318fd6099caee3f2d6615bd099e2\"" May 16 02:19:27.057733 containerd[1485]: time="2025-05-16T02:19:27.057599926Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 02:19:27.089336 containerd[1485]: time="2025-05-16T02:19:27.089290755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rxltt,Uid:1112de68-d278-4c83-87aa-6f1aa6135129,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9\"" May 16 02:19:27.721600 kubelet[1896]: E0516 02:19:27.721060 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:27.844840 kubelet[1896]: E0516 02:19:27.844463 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngpwb" podUID="bbe556f2-8870-4f54-8884-f3b8b2556667" May 16 02:19:28.541475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423111848.mount: Deactivated successfully. 
May 16 02:19:28.722070 kubelet[1896]: E0516 02:19:28.721949 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:29.209913 containerd[1485]: time="2025-05-16T02:19:29.209858686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:29.211160 containerd[1485]: time="2025-05-16T02:19:29.211120441Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892880" May 16 02:19:29.212469 containerd[1485]: time="2025-05-16T02:19:29.212423193Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:29.215055 containerd[1485]: time="2025-05-16T02:19:29.215029359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:29.216019 containerd[1485]: time="2025-05-16T02:19:29.215634291Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.157994549s" May 16 02:19:29.216019 containerd[1485]: time="2025-05-16T02:19:29.215675168Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 16 02:19:29.218333 containerd[1485]: time="2025-05-16T02:19:29.218114961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 16 02:19:29.218454 containerd[1485]: time="2025-05-16T02:19:29.218423956Z" level=info msg="CreateContainer within sandbox \"5fc37ea1a01c549841114e7d9413dc348f80318fd6099caee3f2d6615bd099e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 02:19:29.235795 containerd[1485]: time="2025-05-16T02:19:29.234474506Z" level=info msg="Container c7a8476f16a39bea42b92b8a789b9434ca125c3a070fd6d1a943f07269e66a60: CDI devices from CRI Config.CDIDevices: []" May 16 02:19:29.250732 containerd[1485]: time="2025-05-16T02:19:29.250591256Z" level=info msg="CreateContainer within sandbox \"5fc37ea1a01c549841114e7d9413dc348f80318fd6099caee3f2d6615bd099e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7a8476f16a39bea42b92b8a789b9434ca125c3a070fd6d1a943f07269e66a60\"" May 16 02:19:29.251644 containerd[1485]: time="2025-05-16T02:19:29.251500203Z" level=info msg="StartContainer for \"c7a8476f16a39bea42b92b8a789b9434ca125c3a070fd6d1a943f07269e66a60\"" May 16 02:19:29.254457 containerd[1485]: time="2025-05-16T02:19:29.254365849Z" level=info msg="connecting to shim c7a8476f16a39bea42b92b8a789b9434ca125c3a070fd6d1a943f07269e66a60" address="unix:///run/containerd/s/b1b0526d7d6445337931fd5008a238c584d200cfeac4ea53b5a607e6431217b2" protocol=ttrpc version=3 May 16 02:19:29.291105 systemd[1]: Started cri-containerd-c7a8476f16a39bea42b92b8a789b9434ca125c3a070fd6d1a943f07269e66a60.scope - libcontainer container c7a8476f16a39bea42b92b8a789b9434ca125c3a070fd6d1a943f07269e66a60. 
May 16 02:19:29.353817 containerd[1485]: time="2025-05-16T02:19:29.353663348Z" level=info msg="StartContainer for \"c7a8476f16a39bea42b92b8a789b9434ca125c3a070fd6d1a943f07269e66a60\" returns successfully" May 16 02:19:29.723001 kubelet[1896]: E0516 02:19:29.722451 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:29.845948 kubelet[1896]: E0516 02:19:29.844630 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngpwb" podUID="bbe556f2-8870-4f54-8884-f3b8b2556667" May 16 02:19:29.905940 kubelet[1896]: I0516 02:19:29.905390 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-98drc" podStartSLOduration=3.745362167 podStartE2EDuration="5.905354052s" podCreationTimestamp="2025-05-16 02:19:24 +0000 UTC" firstStartedPulling="2025-05-16 02:19:27.057026501 +0000 UTC m=+4.057470155" lastFinishedPulling="2025-05-16 02:19:29.217018386 +0000 UTC m=+6.217462040" observedRunningTime="2025-05-16 02:19:29.904440816 +0000 UTC m=+6.904884480" watchObservedRunningTime="2025-05-16 02:19:29.905354052 +0000 UTC m=+6.905797706" May 16 02:19:29.968871 kubelet[1896]: E0516 02:19:29.968784 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.968871 kubelet[1896]: W0516 02:19:29.968818 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.969384 kubelet[1896]: E0516 02:19:29.969141 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.969898 kubelet[1896]: E0516 02:19:29.969682 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.969898 kubelet[1896]: W0516 02:19:29.969721 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.969898 kubelet[1896]: E0516 02:19:29.969743 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.970748 kubelet[1896]: E0516 02:19:29.970385 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.970748 kubelet[1896]: W0516 02:19:29.970403 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.970748 kubelet[1896]: E0516 02:19:29.970423 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:29.971369 kubelet[1896]: E0516 02:19:29.971216 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.971369 kubelet[1896]: W0516 02:19:29.971236 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.971369 kubelet[1896]: E0516 02:19:29.971254 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.971754 kubelet[1896]: E0516 02:19:29.971499 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.971754 kubelet[1896]: W0516 02:19:29.971513 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.971754 kubelet[1896]: E0516 02:19:29.971529 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.971988 kubelet[1896]: E0516 02:19:29.971973 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.972073 kubelet[1896]: W0516 02:19:29.972058 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.972174 kubelet[1896]: E0516 02:19:29.972149 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.972519 kubelet[1896]: E0516 02:19:29.972503 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.972752 kubelet[1896]: W0516 02:19:29.972610 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.972752 kubelet[1896]: E0516 02:19:29.972633 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.973104 kubelet[1896]: E0516 02:19:29.973035 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.973325 kubelet[1896]: W0516 02:19:29.973178 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.973325 kubelet[1896]: E0516 02:19:29.973198 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:29.974364 kubelet[1896]: E0516 02:19:29.974000 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.974364 kubelet[1896]: W0516 02:19:29.974030 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.974364 kubelet[1896]: E0516 02:19:29.974051 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.974864 kubelet[1896]: E0516 02:19:29.974553 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.974864 kubelet[1896]: W0516 02:19:29.974567 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.974864 kubelet[1896]: E0516 02:19:29.974582 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.974864 kubelet[1896]: E0516 02:19:29.974818 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.974864 kubelet[1896]: W0516 02:19:29.974840 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.974864 kubelet[1896]: E0516 02:19:29.974850 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.975277 kubelet[1896]: E0516 02:19:29.975059 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.975277 kubelet[1896]: W0516 02:19:29.975078 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.975277 kubelet[1896]: E0516 02:19:29.975094 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.975404 kubelet[1896]: E0516 02:19:29.975302 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.975404 kubelet[1896]: W0516 02:19:29.975313 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.975404 kubelet[1896]: E0516 02:19:29.975323 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:29.975646 kubelet[1896]: E0516 02:19:29.975517 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.975646 kubelet[1896]: W0516 02:19:29.975527 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.975646 kubelet[1896]: E0516 02:19:29.975540 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.975921 kubelet[1896]: E0516 02:19:29.975746 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.975921 kubelet[1896]: W0516 02:19:29.975784 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.975921 kubelet[1896]: E0516 02:19:29.975798 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.976116 kubelet[1896]: E0516 02:19:29.976003 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.976116 kubelet[1896]: W0516 02:19:29.976019 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.976116 kubelet[1896]: E0516 02:19:29.976030 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.976293 kubelet[1896]: E0516 02:19:29.976233 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.976293 kubelet[1896]: W0516 02:19:29.976246 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.976293 kubelet[1896]: E0516 02:19:29.976260 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.976532 kubelet[1896]: E0516 02:19:29.976458 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.976532 kubelet[1896]: W0516 02:19:29.976474 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.976532 kubelet[1896]: E0516 02:19:29.976484 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:29.976717 kubelet[1896]: E0516 02:19:29.976691 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.976717 kubelet[1896]: W0516 02:19:29.976708 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.976922 kubelet[1896]: E0516 02:19:29.976718 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.977022 kubelet[1896]: E0516 02:19:29.976977 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.977022 kubelet[1896]: W0516 02:19:29.976991 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.977022 kubelet[1896]: E0516 02:19:29.977005 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.995649 kubelet[1896]: E0516 02:19:29.995545 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.995649 kubelet[1896]: W0516 02:19:29.995565 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.995649 kubelet[1896]: E0516 02:19:29.995583 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.996032 kubelet[1896]: E0516 02:19:29.995826 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.996032 kubelet[1896]: W0516 02:19:29.995837 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.996245 kubelet[1896]: E0516 02:19:29.996218 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.996245 kubelet[1896]: W0516 02:19:29.996237 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.996338 kubelet[1896]: E0516 02:19:29.996249 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.996393 kubelet[1896]: E0516 02:19:29.996353 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:29.996536 kubelet[1896]: E0516 02:19:29.996511 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.996536 kubelet[1896]: W0516 02:19:29.996526 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.996536 kubelet[1896]: E0516 02:19:29.996542 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.996775 kubelet[1896]: E0516 02:19:29.996746 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.996775 kubelet[1896]: W0516 02:19:29.996773 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.996882 kubelet[1896]: E0516 02:19:29.996794 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.997101 kubelet[1896]: E0516 02:19:29.997084 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.997101 kubelet[1896]: W0516 02:19:29.997099 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.997183 kubelet[1896]: E0516 02:19:29.997113 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.997589 kubelet[1896]: E0516 02:19:29.997572 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.997589 kubelet[1896]: W0516 02:19:29.997586 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.997827 kubelet[1896]: E0516 02:19:29.997656 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.997827 kubelet[1896]: E0516 02:19:29.997790 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.997827 kubelet[1896]: W0516 02:19:29.997801 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.997827 kubelet[1896]: E0516 02:19:29.997815 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:29.998094 kubelet[1896]: E0516 02:19:29.998077 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.998132 kubelet[1896]: W0516 02:19:29.998111 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.998132 kubelet[1896]: E0516 02:19:29.998123 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.998420 kubelet[1896]: E0516 02:19:29.998403 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.998420 kubelet[1896]: W0516 02:19:29.998418 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.999237 kubelet[1896]: E0516 02:19:29.998854 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.999237 kubelet[1896]: E0516 02:19:29.999123 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.999237 kubelet[1896]: W0516 02:19:29.999135 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:29.999237 kubelet[1896]: E0516 02:19:29.999145 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:29.999970 kubelet[1896]: E0516 02:19:29.999940 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:29.999970 kubelet[1896]: W0516 02:19:29.999967 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.000060 kubelet[1896]: E0516 02:19:29.999978 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:30.723504 kubelet[1896]: E0516 02:19:30.723361 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:30.885821 kubelet[1896]: E0516 02:19:30.885497 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.885821 kubelet[1896]: W0516 02:19:30.885604 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.885821 kubelet[1896]: E0516 02:19:30.885666 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.887238 kubelet[1896]: E0516 02:19:30.886843 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.887238 kubelet[1896]: W0516 02:19:30.886876 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.887238 kubelet[1896]: E0516 02:19:30.886948 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.888446 kubelet[1896]: E0516 02:19:30.888060 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.888446 kubelet[1896]: W0516 02:19:30.888092 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.888446 kubelet[1896]: E0516 02:19:30.888163 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.889453 kubelet[1896]: E0516 02:19:30.889172 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.889453 kubelet[1896]: W0516 02:19:30.889203 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.889453 kubelet[1896]: E0516 02:19:30.889274 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:30.890650 kubelet[1896]: E0516 02:19:30.890308 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.890650 kubelet[1896]: W0516 02:19:30.890340 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.890650 kubelet[1896]: E0516 02:19:30.890410 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.891546 kubelet[1896]: E0516 02:19:30.891331 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.891546 kubelet[1896]: W0516 02:19:30.891362 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.891546 kubelet[1896]: E0516 02:19:30.891387 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.892687 kubelet[1896]: E0516 02:19:30.892247 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.892687 kubelet[1896]: W0516 02:19:30.892278 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.892687 kubelet[1896]: E0516 02:19:30.892351 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.893401 kubelet[1896]: E0516 02:19:30.893081 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.893401 kubelet[1896]: W0516 02:19:30.893107 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.893401 kubelet[1896]: E0516 02:19:30.893132 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.894525 kubelet[1896]: E0516 02:19:30.894482 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.895233 kubelet[1896]: W0516 02:19:30.894850 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.895233 kubelet[1896]: E0516 02:19:30.894923 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:30.896427 kubelet[1896]: E0516 02:19:30.896102 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.896427 kubelet[1896]: W0516 02:19:30.896152 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.896427 kubelet[1896]: E0516 02:19:30.896198 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.897483 kubelet[1896]: E0516 02:19:30.897240 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.897483 kubelet[1896]: W0516 02:19:30.897286 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.897483 kubelet[1896]: E0516 02:19:30.897322 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.899128 kubelet[1896]: E0516 02:19:30.898679 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.899128 kubelet[1896]: W0516 02:19:30.898726 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.899128 kubelet[1896]: E0516 02:19:30.898836 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.900353 kubelet[1896]: E0516 02:19:30.899970 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.900353 kubelet[1896]: W0516 02:19:30.900021 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.900353 kubelet[1896]: E0516 02:19:30.900068 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.901334 kubelet[1896]: E0516 02:19:30.900980 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.901334 kubelet[1896]: W0516 02:19:30.901009 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.901334 kubelet[1896]: E0516 02:19:30.901033 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:30.902699 kubelet[1896]: E0516 02:19:30.902116 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.902699 kubelet[1896]: W0516 02:19:30.902152 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.902699 kubelet[1896]: E0516 02:19:30.902191 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.904186 kubelet[1896]: E0516 02:19:30.903500 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.904186 kubelet[1896]: W0516 02:19:30.903551 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.904186 kubelet[1896]: E0516 02:19:30.903593 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.905875 kubelet[1896]: E0516 02:19:30.905195 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.905875 kubelet[1896]: W0516 02:19:30.905224 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.905875 kubelet[1896]: E0516 02:19:30.905249 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.907867 kubelet[1896]: E0516 02:19:30.906580 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.907867 kubelet[1896]: W0516 02:19:30.906641 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.907867 kubelet[1896]: E0516 02:19:30.906678 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.908839 kubelet[1896]: E0516 02:19:30.908477 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.908839 kubelet[1896]: W0516 02:19:30.908519 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.908839 kubelet[1896]: E0516 02:19:30.908561 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:30.910203 kubelet[1896]: E0516 02:19:30.909725 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.910203 kubelet[1896]: W0516 02:19:30.909872 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.910203 kubelet[1896]: E0516 02:19:30.909922 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.913283 kubelet[1896]: E0516 02:19:30.911965 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.913283 kubelet[1896]: W0516 02:19:30.912005 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.913283 kubelet[1896]: E0516 02:19:30.912031 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.914114 kubelet[1896]: E0516 02:19:30.914080 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.914526 kubelet[1896]: W0516 02:19:30.914416 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.915306 kubelet[1896]: E0516 02:19:30.915235 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.916605 kubelet[1896]: E0516 02:19:30.916137 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.916605 kubelet[1896]: W0516 02:19:30.916179 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.916605 kubelet[1896]: E0516 02:19:30.916274 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.919761 kubelet[1896]: E0516 02:19:30.919212 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.919761 kubelet[1896]: W0516 02:19:30.919265 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.919761 kubelet[1896]: E0516 02:19:30.919324 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:30.922124 kubelet[1896]: E0516 02:19:30.922072 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.922921 kubelet[1896]: W0516 02:19:30.922418 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.922921 kubelet[1896]: E0516 02:19:30.922544 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.923386 kubelet[1896]: E0516 02:19:30.923356 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.923543 kubelet[1896]: W0516 02:19:30.923502 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.923892 kubelet[1896]: E0516 02:19:30.923703 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.925011 kubelet[1896]: E0516 02:19:30.924853 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.925011 kubelet[1896]: W0516 02:19:30.924883 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.925011 kubelet[1896]: E0516 02:19:30.924936 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.926689 kubelet[1896]: E0516 02:19:30.925663 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.926689 kubelet[1896]: W0516 02:19:30.925692 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.927450 kubelet[1896]: E0516 02:19:30.926993 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.927666 kubelet[1896]: E0516 02:19:30.927638 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.927896 kubelet[1896]: W0516 02:19:30.927864 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.928080 kubelet[1896]: E0516 02:19:30.928052 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 02:19:30.929133 kubelet[1896]: E0516 02:19:30.929103 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.930816 kubelet[1896]: W0516 02:19:30.929334 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.931068 kubelet[1896]: E0516 02:19:30.931035 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.931911 kubelet[1896]: E0516 02:19:30.931881 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.932327 kubelet[1896]: W0516 02:19:30.932091 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.932327 kubelet[1896]: E0516 02:19:30.932139 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:30.932619 kubelet[1896]: E0516 02:19:30.932592 1896 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 02:19:30.932914 kubelet[1896]: W0516 02:19:30.932828 1896 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 02:19:30.932914 kubelet[1896]: E0516 02:19:30.932868 1896 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 02:19:31.073199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509753129.mount: Deactivated successfully. 
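Note on the repeated driver-call.go / plugins.go errors above: the kubelet is probing the FlexVolume plugin directory before Calico's driver binary exists on disk. The executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not found, the "init" call therefore produces empty output, and empty output cannot be unmarshalled as JSON, hence "unexpected end of JSON input". The flexvol-driver container pulled just below is what installs that binary, after which the probing stops. As a hedged illustration of the contract involved (a minimal sketch, not the Calico nodeagent~uds driver), a FlexVolume driver only needs to answer "init" with a small JSON status document:

// Minimal sketch of a FlexVolume driver's "init" handler, showing the JSON shape
// the kubelet's driver-call.go expects to unmarshal. Illustrative only; this is
// not the Calico nodeagent~uds driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the documented FlexVolume response fields.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// A non-empty JSON document here is what prevents the
		// "unexpected end of JSON input" errors seen in the log.
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any other call is simply reported as unsupported in this sketch.
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}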
May 16 02:19:31.212731 containerd[1485]: time="2025-05-16T02:19:31.212647997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:31.213936 containerd[1485]: time="2025-05-16T02:19:31.213825979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460" May 16 02:19:31.215260 containerd[1485]: time="2025-05-16T02:19:31.215200403Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:31.217843 containerd[1485]: time="2025-05-16T02:19:31.217794737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:31.219379 containerd[1485]: time="2025-05-16T02:19:31.218993573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.000823506s" May 16 02:19:31.219379 containerd[1485]: time="2025-05-16T02:19:31.219097009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 16 02:19:31.221849 containerd[1485]: time="2025-05-16T02:19:31.221814691Z" level=info msg="CreateContainer within sandbox \"c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 02:19:31.236150 containerd[1485]: time="2025-05-16T02:19:31.236098171Z" level=info msg="Container d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72: CDI devices from CRI Config.CDIDevices: []" May 16 02:19:31.253054 containerd[1485]: time="2025-05-16T02:19:31.252896329Z" level=info msg="CreateContainer within sandbox \"c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72\"" May 16 02:19:31.253940 containerd[1485]: time="2025-05-16T02:19:31.253745213Z" level=info msg="StartContainer for \"d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72\"" May 16 02:19:31.256513 containerd[1485]: time="2025-05-16T02:19:31.256475181Z" level=info msg="connecting to shim d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72" address="unix:///run/containerd/s/14e4974d2d4ea702c29b9d3e9f5fa56d2e07c3ee7bfca8a0599f3c8720ff502b" protocol=ttrpc version=3 May 16 02:19:31.294989 systemd[1]: Started cri-containerd-d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72.scope - libcontainer container d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72. 
May 16 02:19:31.348518 containerd[1485]: time="2025-05-16T02:19:31.348337334Z" level=info msg="StartContainer for \"d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72\" returns successfully" May 16 02:19:31.356943 systemd[1]: cri-containerd-d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72.scope: Deactivated successfully. May 16 02:19:31.361680 containerd[1485]: time="2025-05-16T02:19:31.361450798Z" level=info msg="received exit event container_id:\"d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72\" id:\"d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72\" pid:2293 exited_at:{seconds:1747361971 nanos:360207839}" May 16 02:19:31.361946 containerd[1485]: time="2025-05-16T02:19:31.361819288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72\" id:\"d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72\" pid:2293 exited_at:{seconds:1747361971 nanos:360207839}" May 16 02:19:31.724086 kubelet[1896]: E0516 02:19:31.724023 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:31.844357 kubelet[1896]: E0516 02:19:31.843615 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngpwb" podUID="bbe556f2-8870-4f54-8884-f3b8b2556667" May 16 02:19:32.009386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72-rootfs.mount: Deactivated successfully. May 16 02:19:32.725324 kubelet[1896]: E0516 02:19:32.725261 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:32.902233 containerd[1485]: time="2025-05-16T02:19:32.902135797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 16 02:19:33.726261 kubelet[1896]: E0516 02:19:33.726162 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:33.843726 kubelet[1896]: E0516 02:19:33.843085 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngpwb" podUID="bbe556f2-8870-4f54-8884-f3b8b2556667" May 16 02:19:34.727292 kubelet[1896]: E0516 02:19:34.727241 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:35.728199 kubelet[1896]: E0516 02:19:35.727942 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:35.846823 kubelet[1896]: E0516 02:19:35.845657 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngpwb" podUID="bbe556f2-8870-4f54-8884-f3b8b2556667" May 16 02:19:36.372216 update_engine[1464]: I20250516 02:19:36.371910 1464 update_attempter.cc:509] Updating boot flags... 
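The entries above trace a complete container lifecycle for the flexvol-driver container: PullImage resolves the tag to a digest and image ID, CreateContainer materialises the container in the existing sandbox, containerd connects to the shim over ttrpc, StartContainer runs it, and the TaskExit plus scope-deactivation lines record its expected one-shot exit. The sketch below reproduces the same pull/create/start/wait sequence with the public containerd Go client; it is an illustration under assumptions (containerd 1.x-style import paths, direct client use in the k8s.io namespace), not the kubelet's CRI code path.

// Hedged sketch: pull, create, start and wait on a container with the containerd
// Go client, mirroring the PullImage -> CreateContainer -> StartContainer ->
// TaskExit sequence in the log. Image reference is taken from the log; IDs are
// illustrative.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "flexvol-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("flexvol-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // corresponds to the TaskExit event in the log
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-exitCh
	log.Printf("container exited with status %d", status.ExitCode())
}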
May 16 02:19:36.426265 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2338) May 16 02:19:36.531990 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2337) May 16 02:19:36.728839 kubelet[1896]: E0516 02:19:36.728782 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:37.729396 kubelet[1896]: E0516 02:19:37.729051 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:37.765814 containerd[1485]: time="2025-05-16T02:19:37.765404791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:37.767628 containerd[1485]: time="2025-05-16T02:19:37.767559016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 16 02:19:37.769707 containerd[1485]: time="2025-05-16T02:19:37.768806538Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:37.772391 containerd[1485]: time="2025-05-16T02:19:37.772329339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:37.773269 containerd[1485]: time="2025-05-16T02:19:37.773092372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.870888623s" May 16 02:19:37.773269 containerd[1485]: time="2025-05-16T02:19:37.773144527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 16 02:19:37.776599 containerd[1485]: time="2025-05-16T02:19:37.776549530Z" level=info msg="CreateContainer within sandbox \"c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 02:19:37.794208 containerd[1485]: time="2025-05-16T02:19:37.794116623Z" level=info msg="Container 7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9: CDI devices from CRI Config.CDIDevices: []" May 16 02:19:37.823545 containerd[1485]: time="2025-05-16T02:19:37.823320769Z" level=info msg="CreateContainer within sandbox \"c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9\"" May 16 02:19:37.824821 containerd[1485]: time="2025-05-16T02:19:37.824694006Z" level=info msg="StartContainer for \"7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9\"" May 16 02:19:37.830306 containerd[1485]: time="2025-05-16T02:19:37.830202060Z" level=info msg="connecting to shim 7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9" address="unix:///run/containerd/s/14e4974d2d4ea702c29b9d3e9f5fa56d2e07c3ee7bfca8a0599f3c8720ff502b" protocol=ttrpc version=3 May 16 
02:19:37.848394 kubelet[1896]: E0516 02:19:37.843804 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngpwb" podUID="bbe556f2-8870-4f54-8884-f3b8b2556667" May 16 02:19:37.894058 systemd[1]: Started cri-containerd-7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9.scope - libcontainer container 7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9. May 16 02:19:37.984620 containerd[1485]: time="2025-05-16T02:19:37.984454424Z" level=info msg="StartContainer for \"7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9\" returns successfully" May 16 02:19:38.731421 kubelet[1896]: E0516 02:19:38.731262 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:39.732432 kubelet[1896]: E0516 02:19:39.732139 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:39.844303 kubelet[1896]: E0516 02:19:39.844101 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngpwb" podUID="bbe556f2-8870-4f54-8884-f3b8b2556667" May 16 02:19:39.855472 containerd[1485]: time="2025-05-16T02:19:39.855076917Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 02:19:39.863588 systemd[1]: cri-containerd-7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9.scope: Deactivated successfully. May 16 02:19:39.865353 systemd[1]: cri-containerd-7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9.scope: Consumed 1.243s CPU time, 191.8M memory peak, 170.9M written to disk. May 16 02:19:39.867817 containerd[1485]: time="2025-05-16T02:19:39.867455180Z" level=info msg="received exit event container_id:\"7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9\" id:\"7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9\" pid:2363 exited_at:{seconds:1747361979 nanos:866208202}" May 16 02:19:39.872400 containerd[1485]: time="2025-05-16T02:19:39.870398749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9\" id:\"7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9\" pid:2363 exited_at:{seconds:1747361979 nanos:866208202}" May 16 02:19:39.876826 kubelet[1896]: I0516 02:19:39.875438 1896 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 02:19:39.936549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9-rootfs.mount: Deactivated successfully. 
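The "failed to reload cni configuration" error at 02:19:39.855 is expected at this stage: the install-cni container has just written /etc/cni/net.d/calico-kubeconfig, which triggers a filesystem-watch reload in containerd's CRI plugin, but no network configuration file exists in the directory yet, so the plugin stays in "cni plugin not initialized" until the Calico conflist is written. A minimal sketch of that kind of directory check follows (an illustration with an assumed extension list, not containerd's or go-cni's actual loader):

// Hedged sketch of the check behind "no network config found in /etc/cni/net.d":
// list candidate CNI config files and report when none exist. Illustrative only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigs(dir string) ([]string, error) {
	var files []string
	// Assumed candidate extensions for illustration.
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		files = append(files, matches...)
	}
	return files, nil
}

func main() {
	dir := "/etc/cni/net.d"
	files, err := cniConfigs(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(files) == 0 {
		// Matches the condition in the log: a WRITE event (the calico-kubeconfig
		// file) triggered a reload, but no network config is present yet.
		fmt.Printf("no network config found in %s: cni plugin not initialized\n", dir)
		return
	}
	fmt.Println("found CNI configs:", files)
}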
May 16 02:19:40.733550 kubelet[1896]: E0516 02:19:40.733370 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:40.956725 containerd[1485]: time="2025-05-16T02:19:40.956599223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 16 02:19:41.734690 kubelet[1896]: E0516 02:19:41.734212 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:41.861987 systemd[1]: Created slice kubepods-besteffort-podbbe556f2_8870_4f54_8884_f3b8b2556667.slice - libcontainer container kubepods-besteffort-podbbe556f2_8870_4f54_8884_f3b8b2556667.slice. May 16 02:19:41.869474 containerd[1485]: time="2025-05-16T02:19:41.869352239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngpwb,Uid:bbe556f2-8870-4f54-8884-f3b8b2556667,Namespace:calico-system,Attempt:0,}" May 16 02:19:41.982997 containerd[1485]: time="2025-05-16T02:19:41.982911288Z" level=error msg="Failed to destroy network for sandbox \"b5670747ba0651898a88617f7cfd590bc8f747276616321b2acf390fad261eec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 02:19:41.985443 systemd[1]: run-netns-cni\x2d59f339f9\x2d9cbe\x2d98bf\x2d876c\x2dcb294c460109.mount: Deactivated successfully. May 16 02:19:41.989501 containerd[1485]: time="2025-05-16T02:19:41.989354068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngpwb,Uid:bbe556f2-8870-4f54-8884-f3b8b2556667,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5670747ba0651898a88617f7cfd590bc8f747276616321b2acf390fad261eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 02:19:41.990559 kubelet[1896]: E0516 02:19:41.989922 1896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5670747ba0651898a88617f7cfd590bc8f747276616321b2acf390fad261eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 02:19:41.990559 kubelet[1896]: E0516 02:19:41.990135 1896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5670747ba0651898a88617f7cfd590bc8f747276616321b2acf390fad261eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngpwb" May 16 02:19:41.990559 kubelet[1896]: E0516 02:19:41.990174 1896 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5670747ba0651898a88617f7cfd590bc8f747276616321b2acf390fad261eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngpwb" May 16 02:19:41.990715 kubelet[1896]: E0516 02:19:41.990243 1896 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ngpwb_calico-system(bbe556f2-8870-4f54-8884-f3b8b2556667)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ngpwb_calico-system(bbe556f2-8870-4f54-8884-f3b8b2556667)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5670747ba0651898a88617f7cfd590bc8f747276616321b2acf390fad261eec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ngpwb" podUID="bbe556f2-8870-4f54-8884-f3b8b2556667" May 16 02:19:42.735170 kubelet[1896]: E0516 02:19:42.734931 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:43.711672 kubelet[1896]: E0516 02:19:43.711488 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:43.736118 kubelet[1896]: E0516 02:19:43.736031 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:44.737083 kubelet[1896]: E0516 02:19:44.736909 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:45.738026 kubelet[1896]: E0516 02:19:45.737878 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:46.738889 kubelet[1896]: E0516 02:19:46.738826 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:47.740349 kubelet[1896]: E0516 02:19:47.740217 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:48.741650 kubelet[1896]: E0516 02:19:48.741468 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:49.520530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055930212.mount: Deactivated successfully. 
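The RunPodSandbox failure above for csi-node-driver-ngpwb fails on "stat /var/lib/calico/nodename: no such file or directory": the Calico CNI plugin will not set up pod networking until calico/node has started and written the node name to that file, which is exactly what the error text advises checking. A sketch of that readiness check (assumed behaviour for illustration, not Calico's source):

// Hedged sketch of the readiness check implied by the sandbox error: the CNI
// plugin needs /var/lib/calico/nodename, which calico/node writes once it is
// running with /var/lib/calico mounted. Illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func calicoNodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}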
May 16 02:19:49.627046 containerd[1485]: time="2025-05-16T02:19:49.626813825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:49.631831 containerd[1485]: time="2025-05-16T02:19:49.630445892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 16 02:19:49.634228 containerd[1485]: time="2025-05-16T02:19:49.634143585Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:49.639152 containerd[1485]: time="2025-05-16T02:19:49.639095286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:49.640886 containerd[1485]: time="2025-05-16T02:19:49.640733329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 8.684012256s" May 16 02:19:49.642185 containerd[1485]: time="2025-05-16T02:19:49.640981129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 16 02:19:49.688920 containerd[1485]: time="2025-05-16T02:19:49.687292314Z" level=info msg="CreateContainer within sandbox \"c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 02:19:49.714022 containerd[1485]: time="2025-05-16T02:19:49.708671680Z" level=info msg="Container 5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b: CDI devices from CRI Config.CDIDevices: []" May 16 02:19:49.729668 containerd[1485]: time="2025-05-16T02:19:49.729633314Z" level=info msg="CreateContainer within sandbox \"c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\"" May 16 02:19:49.731374 containerd[1485]: time="2025-05-16T02:19:49.731353687Z" level=info msg="StartContainer for \"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\"" May 16 02:19:49.733712 containerd[1485]: time="2025-05-16T02:19:49.733674927Z" level=info msg="connecting to shim 5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b" address="unix:///run/containerd/s/14e4974d2d4ea702c29b9d3e9f5fa56d2e07c3ee7bfca8a0599f3c8720ff502b" protocol=ttrpc version=3 May 16 02:19:49.742934 kubelet[1896]: E0516 02:19:49.742829 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:49.783959 systemd[1]: Started cri-containerd-5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b.scope - libcontainer container 5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b. 
May 16 02:19:49.839801 containerd[1485]: time="2025-05-16T02:19:49.837835942Z" level=info msg="StartContainer for \"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" returns successfully" May 16 02:19:49.929353 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 02:19:49.930906 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 16 02:19:50.152129 containerd[1485]: time="2025-05-16T02:19:50.151953051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"f53c1ae30b5f66c340bee06e9866864cd710656bf48208c20c0bc763a584d3ab\" pid:2478 exit_status:1 exited_at:{seconds:1747361990 nanos:151149463}" May 16 02:19:50.743547 kubelet[1896]: E0516 02:19:50.743336 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:51.144737 containerd[1485]: time="2025-05-16T02:19:51.144490226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"5e5126947764f9a25153298ff819cf7a943de36ecdff75659f4b153555a4dc25\" pid:2520 exit_status:1 exited_at:{seconds:1747361991 nanos:143879414}" May 16 02:19:51.735817 kernel: bpftool[2654]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 16 02:19:51.744208 kubelet[1896]: E0516 02:19:51.744149 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:52.102098 systemd-networkd[1388]: vxlan.calico: Link UP May 16 02:19:52.102109 systemd-networkd[1388]: vxlan.calico: Gained carrier May 16 02:19:52.744663 kubelet[1896]: E0516 02:19:52.744566 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:52.843984 containerd[1485]: time="2025-05-16T02:19:52.843886160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngpwb,Uid:bbe556f2-8870-4f54-8884-f3b8b2556667,Namespace:calico-system,Attempt:0,}" May 16 02:19:53.061884 systemd-networkd[1388]: cali4c8bdbd07af: Link UP May 16 02:19:53.064141 systemd-networkd[1388]: cali4c8bdbd07af: Gained carrier May 16 02:19:53.090215 kubelet[1896]: I0516 02:19:53.089166 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rxltt" podStartSLOduration=6.5350173609999995 podStartE2EDuration="29.08906332s" podCreationTimestamp="2025-05-16 02:19:24 +0000 UTC" firstStartedPulling="2025-05-16 02:19:27.091106906 +0000 UTC m=+4.091550560" lastFinishedPulling="2025-05-16 02:19:49.645152825 +0000 UTC m=+26.645596519" observedRunningTime="2025-05-16 02:19:50.071889429 +0000 UTC m=+27.072333093" watchObservedRunningTime="2025-05-16 02:19:53.08906332 +0000 UTC m=+30.089507015" May 16 02:19:53.090923 containerd[1485]: 2025-05-16 02:19:52.910 [INFO][2726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-csi--node--driver--ngpwb-eth0 csi-node-driver- calico-system bbe556f2-8870-4f54-8884-f3b8b2556667 1398 0 2025-05-16 02:19:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] 
[]} {k8s 172.24.4.70 csi-node-driver-ngpwb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4c8bdbd07af [] [] }} ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Namespace="calico-system" Pod="csi-node-driver-ngpwb" WorkloadEndpoint="172.24.4.70-k8s-csi--node--driver--ngpwb-" May 16 02:19:53.090923 containerd[1485]: 2025-05-16 02:19:52.910 [INFO][2726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Namespace="calico-system" Pod="csi-node-driver-ngpwb" WorkloadEndpoint="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" May 16 02:19:53.090923 containerd[1485]: 2025-05-16 02:19:52.975 [INFO][2738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" HandleID="k8s-pod-network.cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Workload="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:52.975 [INFO][2738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" HandleID="k8s-pod-network.cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Workload="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3c20), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.70", "pod":"csi-node-driver-ngpwb", "timestamp":"2025-05-16 02:19:52.975620233 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:52.976 [INFO][2738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:52.976 [INFO][2738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:52.976 [INFO][2738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:52.993 [INFO][2738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" host="172.24.4.70" May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:53.004 [INFO][2738] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:53.014 [INFO][2738] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:53.019 [INFO][2738] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:53.024 [INFO][2738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:19:53.091490 containerd[1485]: 2025-05-16 02:19:53.024 [INFO][2738] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" host="172.24.4.70" May 16 02:19:53.094087 containerd[1485]: 2025-05-16 02:19:53.027 [INFO][2738] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189 May 16 02:19:53.094087 containerd[1485]: 2025-05-16 02:19:53.036 [INFO][2738] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" host="172.24.4.70" May 16 02:19:53.094087 containerd[1485]: 2025-05-16 02:19:53.054 [INFO][2738] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.129/26] block=192.168.76.128/26 handle="k8s-pod-network.cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" host="172.24.4.70" May 16 02:19:53.094087 containerd[1485]: 2025-05-16 02:19:53.054 [INFO][2738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.129/26] handle="k8s-pod-network.cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" host="172.24.4.70" May 16 02:19:53.094087 containerd[1485]: 2025-05-16 02:19:53.054 [INFO][2738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 02:19:53.094087 containerd[1485]: 2025-05-16 02:19:53.054 [INFO][2738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.129/26] IPv6=[] ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" HandleID="k8s-pod-network.cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Workload="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" May 16 02:19:53.095479 containerd[1485]: 2025-05-16 02:19:53.057 [INFO][2726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Namespace="calico-system" Pod="csi-node-driver-ngpwb" WorkloadEndpoint="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-csi--node--driver--ngpwb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bbe556f2-8870-4f54-8884-f3b8b2556667", ResourceVersion:"1398", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 19, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"csi-node-driver-ngpwb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c8bdbd07af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:19:53.096074 containerd[1485]: 2025-05-16 02:19:53.058 [INFO][2726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.129/32] ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Namespace="calico-system" Pod="csi-node-driver-ngpwb" WorkloadEndpoint="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" May 16 02:19:53.096074 containerd[1485]: 2025-05-16 02:19:53.058 [INFO][2726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c8bdbd07af ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Namespace="calico-system" Pod="csi-node-driver-ngpwb" WorkloadEndpoint="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" May 16 02:19:53.096074 containerd[1485]: 2025-05-16 02:19:53.065 [INFO][2726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Namespace="calico-system" Pod="csi-node-driver-ngpwb" WorkloadEndpoint="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" May 16 02:19:53.096530 containerd[1485]: 2025-05-16 02:19:53.066 [INFO][2726] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Namespace="calico-system" Pod="csi-node-driver-ngpwb" 
WorkloadEndpoint="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-csi--node--driver--ngpwb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bbe556f2-8870-4f54-8884-f3b8b2556667", ResourceVersion:"1398", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 19, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189", Pod:"csi-node-driver-ngpwb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c8bdbd07af", MAC:"46:fb:75:ae:87:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:19:53.096732 containerd[1485]: 2025-05-16 02:19:53.088 [INFO][2726] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" Namespace="calico-system" Pod="csi-node-driver-ngpwb" WorkloadEndpoint="172.24.4.70-k8s-csi--node--driver--ngpwb-eth0" May 16 02:19:53.138301 containerd[1485]: time="2025-05-16T02:19:53.138236532Z" level=info msg="connecting to shim cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189" address="unix:///run/containerd/s/7073c9217238c0d73a0e5700ca0b06dd5e080a347ca937d5075b241a3892e814" namespace=k8s.io protocol=ttrpc version=3 May 16 02:19:53.175943 systemd[1]: Started cri-containerd-cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189.scope - libcontainer container cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189. 
May 16 02:19:53.204373 containerd[1485]: time="2025-05-16T02:19:53.204304809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngpwb,Uid:bbe556f2-8870-4f54-8884-f3b8b2556667,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189\"" May 16 02:19:53.207150 containerd[1485]: time="2025-05-16T02:19:53.207118942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 16 02:19:53.745373 kubelet[1896]: E0516 02:19:53.745276 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:53.833403 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL May 16 02:19:54.281415 systemd-networkd[1388]: cali4c8bdbd07af: Gained IPv6LL May 16 02:19:54.746202 kubelet[1896]: E0516 02:19:54.746127 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:55.746912 kubelet[1896]: E0516 02:19:55.746809 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:56.245410 containerd[1485]: time="2025-05-16T02:19:56.245362201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:56.247348 containerd[1485]: time="2025-05-16T02:19:56.247260663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 16 02:19:56.248510 containerd[1485]: time="2025-05-16T02:19:56.248461837Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:56.251003 containerd[1485]: time="2025-05-16T02:19:56.250955581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:56.251601 containerd[1485]: time="2025-05-16T02:19:56.251555582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 3.044402806s" May 16 02:19:56.251647 containerd[1485]: time="2025-05-16T02:19:56.251602202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 16 02:19:56.254637 containerd[1485]: time="2025-05-16T02:19:56.254594422Z" level=info msg="CreateContainer within sandbox \"cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 16 02:19:56.269815 containerd[1485]: time="2025-05-16T02:19:56.268367524Z" level=info msg="Container 70cd8838d16b387ccd1b75e8a1738085d6e8f2117e7232fc2b52a7296ce2a5fa: CDI devices from CRI Config.CDIDevices: []" May 16 02:19:56.283893 containerd[1485]: time="2025-05-16T02:19:56.283271034Z" level=info msg="CreateContainer within sandbox \"cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"70cd8838d16b387ccd1b75e8a1738085d6e8f2117e7232fc2b52a7296ce2a5fa\"" May 16 02:19:56.284112 containerd[1485]: time="2025-05-16T02:19:56.284077220Z" level=info msg="StartContainer for \"70cd8838d16b387ccd1b75e8a1738085d6e8f2117e7232fc2b52a7296ce2a5fa\"" May 16 02:19:56.285689 containerd[1485]: time="2025-05-16T02:19:56.285650138Z" level=info msg="connecting to shim 70cd8838d16b387ccd1b75e8a1738085d6e8f2117e7232fc2b52a7296ce2a5fa" address="unix:///run/containerd/s/7073c9217238c0d73a0e5700ca0b06dd5e080a347ca937d5075b241a3892e814" protocol=ttrpc version=3 May 16 02:19:56.311933 systemd[1]: Started cri-containerd-70cd8838d16b387ccd1b75e8a1738085d6e8f2117e7232fc2b52a7296ce2a5fa.scope - libcontainer container 70cd8838d16b387ccd1b75e8a1738085d6e8f2117e7232fc2b52a7296ce2a5fa. May 16 02:19:56.356463 containerd[1485]: time="2025-05-16T02:19:56.356420149Z" level=info msg="StartContainer for \"70cd8838d16b387ccd1b75e8a1738085d6e8f2117e7232fc2b52a7296ce2a5fa\" returns successfully" May 16 02:19:56.357911 containerd[1485]: time="2025-05-16T02:19:56.357884708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 16 02:19:56.747183 kubelet[1896]: E0516 02:19:56.747030 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:57.748416 kubelet[1896]: E0516 02:19:57.748284 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:58.749098 kubelet[1896]: E0516 02:19:58.748476 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:58.970667 containerd[1485]: time="2025-05-16T02:19:58.970599309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:58.972241 containerd[1485]: time="2025-05-16T02:19:58.972065052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 16 02:19:58.974238 containerd[1485]: time="2025-05-16T02:19:58.973626610Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:58.977103 containerd[1485]: time="2025-05-16T02:19:58.977068836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:19:58.977641 containerd[1485]: time="2025-05-16T02:19:58.977608718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.619619379s" May 16 02:19:58.977702 containerd[1485]: time="2025-05-16T02:19:58.977644367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 16 02:19:58.980160 containerd[1485]: time="2025-05-16T02:19:58.980134752Z" level=info msg="CreateContainer 
within sandbox \"cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 16 02:19:58.995026 containerd[1485]: time="2025-05-16T02:19:58.994975634Z" level=info msg="Container bb187e8e938223baccfbdb85aa0702bb1ccb176e284c4bef446dcc62c48e9bea: CDI devices from CRI Config.CDIDevices: []" May 16 02:19:59.011858 containerd[1485]: time="2025-05-16T02:19:59.011465546Z" level=info msg="CreateContainer within sandbox \"cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bb187e8e938223baccfbdb85aa0702bb1ccb176e284c4bef446dcc62c48e9bea\"" May 16 02:19:59.012875 containerd[1485]: time="2025-05-16T02:19:59.012852036Z" level=info msg="StartContainer for \"bb187e8e938223baccfbdb85aa0702bb1ccb176e284c4bef446dcc62c48e9bea\"" May 16 02:19:59.017189 containerd[1485]: time="2025-05-16T02:19:59.017148135Z" level=info msg="connecting to shim bb187e8e938223baccfbdb85aa0702bb1ccb176e284c4bef446dcc62c48e9bea" address="unix:///run/containerd/s/7073c9217238c0d73a0e5700ca0b06dd5e080a347ca937d5075b241a3892e814" protocol=ttrpc version=3 May 16 02:19:59.040917 systemd[1]: Started cri-containerd-bb187e8e938223baccfbdb85aa0702bb1ccb176e284c4bef446dcc62c48e9bea.scope - libcontainer container bb187e8e938223baccfbdb85aa0702bb1ccb176e284c4bef446dcc62c48e9bea. May 16 02:19:59.096657 containerd[1485]: time="2025-05-16T02:19:59.096605132Z" level=info msg="StartContainer for \"bb187e8e938223baccfbdb85aa0702bb1ccb176e284c4bef446dcc62c48e9bea\" returns successfully" May 16 02:19:59.749580 kubelet[1896]: E0516 02:19:59.749483 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:19:59.873629 kubelet[1896]: I0516 02:19:59.873522 1896 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 16 02:19:59.874023 kubelet[1896]: I0516 02:19:59.873670 1896 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 16 02:20:00.103737 kubelet[1896]: I0516 02:20:00.102757 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ngpwb" podStartSLOduration=30.330388554 podStartE2EDuration="36.102719545s" podCreationTimestamp="2025-05-16 02:19:24 +0000 UTC" firstStartedPulling="2025-05-16 02:19:53.206207686 +0000 UTC m=+30.206651330" lastFinishedPulling="2025-05-16 02:19:58.978538677 +0000 UTC m=+35.978982321" observedRunningTime="2025-05-16 02:20:00.100090397 +0000 UTC m=+37.100534141" watchObservedRunningTime="2025-05-16 02:20:00.102719545 +0000 UTC m=+37.103163239" May 16 02:20:00.750714 kubelet[1896]: E0516 02:20:00.750628 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:01.751662 kubelet[1896]: E0516 02:20:01.751547 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:02.752929 kubelet[1896]: E0516 02:20:02.752808 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:03.712158 kubelet[1896]: E0516 02:20:03.712073 1896 file.go:104] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:03.753692 kubelet[1896]: E0516 02:20:03.753626 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:04.754422 kubelet[1896]: E0516 02:20:04.754222 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:05.755081 kubelet[1896]: E0516 02:20:05.754995 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:06.756203 kubelet[1896]: E0516 02:20:06.756126 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:07.756863 kubelet[1896]: E0516 02:20:07.756742 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:08.757136 kubelet[1896]: E0516 02:20:08.757052 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:09.757576 kubelet[1896]: E0516 02:20:09.757498 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:10.758104 kubelet[1896]: E0516 02:20:10.758014 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:11.758380 kubelet[1896]: E0516 02:20:11.758269 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:12.759024 kubelet[1896]: E0516 02:20:12.758936 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:13.759831 kubelet[1896]: E0516 02:20:13.759653 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:14.760069 kubelet[1896]: E0516 02:20:14.759953 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:15.760274 kubelet[1896]: E0516 02:20:15.760192 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:16.761095 kubelet[1896]: E0516 02:20:16.760978 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:17.764017 kubelet[1896]: E0516 02:20:17.763014 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:18.764821 kubelet[1896]: E0516 02:20:18.764672 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:19.765998 kubelet[1896]: E0516 02:20:19.765878 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:20.766333 kubelet[1896]: E0516 02:20:20.766187 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:21.161313 containerd[1485]: time="2025-05-16T02:20:21.161057183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" 
id:\"748177cb2587ff10eb34d82e8201d2cb1bdeb4236924dcc7d0b65024d4048ad2\" pid:2913 exited_at:{seconds:1747362021 nanos:158616545}" May 16 02:20:21.767079 kubelet[1896]: E0516 02:20:21.766915 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:22.767852 kubelet[1896]: E0516 02:20:22.767710 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:23.711726 kubelet[1896]: E0516 02:20:23.711629 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:23.768299 kubelet[1896]: E0516 02:20:23.768227 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:24.769054 kubelet[1896]: E0516 02:20:24.768947 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:25.769916 kubelet[1896]: E0516 02:20:25.769686 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:26.770760 kubelet[1896]: E0516 02:20:26.770542 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:27.771888 kubelet[1896]: E0516 02:20:27.771736 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:28.772519 kubelet[1896]: E0516 02:20:28.772362 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:29.773491 kubelet[1896]: E0516 02:20:29.773351 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:30.774030 kubelet[1896]: E0516 02:20:30.773939 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:31.775308 kubelet[1896]: E0516 02:20:31.775184 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:32.775756 kubelet[1896]: E0516 02:20:32.775653 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:33.776469 kubelet[1896]: E0516 02:20:33.776364 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:34.777559 kubelet[1896]: E0516 02:20:34.777398 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:35.778678 kubelet[1896]: E0516 02:20:35.778583 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:36.779974 kubelet[1896]: E0516 02:20:36.779854 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:37.780545 kubelet[1896]: E0516 02:20:37.780423 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:38.781704 kubelet[1896]: E0516 02:20:38.781572 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 16 02:20:39.782701 kubelet[1896]: E0516 02:20:39.782611 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:40.783439 kubelet[1896]: E0516 02:20:40.783303 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:41.784603 kubelet[1896]: E0516 02:20:41.784475 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:42.785110 kubelet[1896]: E0516 02:20:42.785021 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:43.711314 kubelet[1896]: E0516 02:20:43.711220 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:43.786057 kubelet[1896]: E0516 02:20:43.785954 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:44.786892 kubelet[1896]: E0516 02:20:44.786746 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:45.787582 kubelet[1896]: E0516 02:20:45.787461 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:46.788747 kubelet[1896]: E0516 02:20:46.788633 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:47.789357 kubelet[1896]: E0516 02:20:47.789192 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:48.790043 kubelet[1896]: E0516 02:20:48.789897 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:49.790572 kubelet[1896]: E0516 02:20:49.790473 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:50.791035 kubelet[1896]: E0516 02:20:50.790935 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:51.203924 containerd[1485]: time="2025-05-16T02:20:51.203561969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"a422fe7de81059639a7a0cc7f2065a315942d0c829e30e517fbe922254d1525c\" pid:2948 exited_at:{seconds:1747362051 nanos:202831787}" May 16 02:20:51.791947 kubelet[1896]: E0516 02:20:51.791632 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:52.792662 kubelet[1896]: E0516 02:20:52.792569 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:53.793397 kubelet[1896]: E0516 02:20:53.793312 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:54.793878 kubelet[1896]: E0516 02:20:54.793725 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:55.794708 kubelet[1896]: E0516 02:20:55.794593 1896 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:56.795328 kubelet[1896]: E0516 02:20:56.795211 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:57.796161 kubelet[1896]: E0516 02:20:57.796045 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:58.797172 kubelet[1896]: E0516 02:20:58.797067 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:20:59.798218 kubelet[1896]: E0516 02:20:59.797997 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:00.799381 kubelet[1896]: E0516 02:21:00.799074 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:01.799494 kubelet[1896]: E0516 02:21:01.799398 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:02.807217 kubelet[1896]: E0516 02:21:02.807079 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:03.712181 kubelet[1896]: E0516 02:21:03.712066 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:03.807931 kubelet[1896]: E0516 02:21:03.807836 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:04.808489 kubelet[1896]: E0516 02:21:04.808391 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:05.809657 kubelet[1896]: E0516 02:21:05.809532 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:06.810524 kubelet[1896]: E0516 02:21:06.810393 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:07.811448 kubelet[1896]: E0516 02:21:07.811357 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:08.812505 kubelet[1896]: E0516 02:21:08.812406 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:09.813647 kubelet[1896]: E0516 02:21:09.813486 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:10.814941 kubelet[1896]: E0516 02:21:10.814753 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:11.815416 kubelet[1896]: E0516 02:21:11.815332 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:12.815968 kubelet[1896]: E0516 02:21:12.815826 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:13.817060 kubelet[1896]: E0516 02:21:13.816943 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:14.817467 kubelet[1896]: E0516 
02:21:14.817377 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:15.818178 kubelet[1896]: E0516 02:21:15.818066 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:16.818533 kubelet[1896]: E0516 02:21:16.818362 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:17.819212 kubelet[1896]: E0516 02:21:17.819081 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:18.820213 kubelet[1896]: E0516 02:21:18.820079 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:19.821126 kubelet[1896]: E0516 02:21:19.820981 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:20.822614 kubelet[1896]: E0516 02:21:20.822370 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:21.206837 containerd[1485]: time="2025-05-16T02:21:21.206502508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"b87b497d44eacc21e30f2d0ef5b08a061b3012ec058f745505c1245a9f6de74f\" pid:2981 exited_at:{seconds:1747362081 nanos:203741136}" May 16 02:21:21.823190 kubelet[1896]: E0516 02:21:21.823109 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:22.824191 kubelet[1896]: E0516 02:21:22.824096 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:23.711493 kubelet[1896]: E0516 02:21:23.711386 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:23.824458 kubelet[1896]: E0516 02:21:23.824352 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:24.825873 kubelet[1896]: E0516 02:21:24.825532 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:25.826024 kubelet[1896]: E0516 02:21:25.825939 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:26.827037 kubelet[1896]: E0516 02:21:26.826940 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:27.827310 kubelet[1896]: E0516 02:21:27.827215 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:28.828124 kubelet[1896]: E0516 02:21:28.828014 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:29.829453 kubelet[1896]: E0516 02:21:29.829257 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:30.830056 kubelet[1896]: E0516 02:21:30.829958 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
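The recurring kubelet error above comes from its static-pod file source: the kubelet watches /etc/kubernetes/manifests (the path named in every record, conventionally the kubelet's staticPodPath) and, because that directory does not exist on this node, re-logs the failure on roughly every one-second retry, with a second variant from file.go about every twenty seconds. A minimal remediation sketch follows; it is not the kubelet's own code, and it assumes the conventional path is actually wanted and should simply be created on the node.

    // A minimal sketch, not kubelet code: check whether the static-pod manifest
    // directory from the log exists and create it so the kubelet's file source
    // stops reporting "Unable to read config path". The path is taken from the
    // log; creating it as a remediation is an assumption.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const manifestDir = "/etc/kubernetes/manifests"

        _, err := os.Stat(manifestDir)
        switch {
        case err == nil:
            fmt.Printf("%s already exists\n", manifestDir)
        case os.IsNotExist(err):
            if mkErr := os.MkdirAll(manifestDir, 0o755); mkErr != nil {
                fmt.Fprintf(os.Stderr, "failed to create %s: %v\n", manifestDir, mkErr)
                os.Exit(1)
            }
            fmt.Printf("created %s; the kubelet should stop logging the error\n", manifestDir)
        default:
            fmt.Fprintf(os.Stderr, "stat %s: %v\n", manifestDir, err)
            os.Exit(1)
        }
    }

Under those assumptions, creating the directory (or pointing staticPodPath at one that does exist in the kubelet configuration) is enough to stop the periodic message.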
May 16 02:21:31.831202 kubelet[1896]: E0516 02:21:31.831008 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:32.831551 kubelet[1896]: E0516 02:21:32.831436 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:33.832659 kubelet[1896]: E0516 02:21:33.832377 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:34.833186 kubelet[1896]: E0516 02:21:34.832999 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:35.833459 kubelet[1896]: E0516 02:21:35.833353 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:36.834465 kubelet[1896]: E0516 02:21:36.834361 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:37.835420 kubelet[1896]: E0516 02:21:37.835320 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:38.836599 kubelet[1896]: E0516 02:21:38.836340 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:39.837895 kubelet[1896]: E0516 02:21:39.837729 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:40.838483 kubelet[1896]: E0516 02:21:40.838362 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:41.839210 kubelet[1896]: E0516 02:21:41.839070 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:42.840131 kubelet[1896]: E0516 02:21:42.839982 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:43.712167 kubelet[1896]: E0516 02:21:43.712068 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:43.840366 kubelet[1896]: E0516 02:21:43.840189 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:44.840734 kubelet[1896]: E0516 02:21:44.840615 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:45.841203 kubelet[1896]: E0516 02:21:45.841045 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:46.842386 kubelet[1896]: E0516 02:21:46.842315 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:47.842666 kubelet[1896]: E0516 02:21:47.842484 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:48.843903 kubelet[1896]: E0516 02:21:48.843640 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:49.844405 kubelet[1896]: E0516 02:21:49.844257 1896 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:50.845062 kubelet[1896]: E0516 02:21:50.844936 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:51.202895 containerd[1485]: time="2025-05-16T02:21:51.202447238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"692db577c073fc3ca44593a55d5660aaebe385aeeb1fc3877c21a6c5beabf387\" pid:3020 exited_at:{seconds:1747362111 nanos:200474306}" May 16 02:21:51.845755 kubelet[1896]: E0516 02:21:51.845517 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:52.848150 kubelet[1896]: E0516 02:21:52.847543 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:53.848248 kubelet[1896]: E0516 02:21:53.848063 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:54.849284 kubelet[1896]: E0516 02:21:54.849157 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:55.850371 kubelet[1896]: E0516 02:21:55.850258 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:56.851529 kubelet[1896]: E0516 02:21:56.851221 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:57.851877 kubelet[1896]: E0516 02:21:57.851656 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:58.852128 kubelet[1896]: E0516 02:21:58.852023 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:21:59.853227 kubelet[1896]: E0516 02:21:59.853110 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:00.854222 kubelet[1896]: E0516 02:22:00.854100 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:01.854650 kubelet[1896]: E0516 02:22:01.854534 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:02.855133 kubelet[1896]: E0516 02:22:02.855017 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:03.712184 kubelet[1896]: E0516 02:22:03.712044 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:03.855757 kubelet[1896]: E0516 02:22:03.855669 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:04.857197 kubelet[1896]: E0516 02:22:04.857011 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:05.857641 kubelet[1896]: E0516 02:22:05.857492 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:06.858266 kubelet[1896]: E0516 02:22:06.858118 1896 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:07.858684 kubelet[1896]: E0516 02:22:07.858567 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:08.859829 kubelet[1896]: E0516 02:22:08.859688 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:09.860825 kubelet[1896]: E0516 02:22:09.860669 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:10.861699 kubelet[1896]: E0516 02:22:10.861604 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:11.862254 kubelet[1896]: E0516 02:22:11.862169 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:12.863402 kubelet[1896]: E0516 02:22:12.863256 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:13.864077 kubelet[1896]: E0516 02:22:13.863986 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:14.864676 kubelet[1896]: E0516 02:22:14.864568 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:15.865711 kubelet[1896]: E0516 02:22:15.865594 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:16.866664 kubelet[1896]: E0516 02:22:16.866550 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:17.867595 kubelet[1896]: E0516 02:22:17.867479 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:18.868389 kubelet[1896]: E0516 02:22:18.868176 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:19.869301 kubelet[1896]: E0516 02:22:19.869188 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:20.869985 kubelet[1896]: E0516 02:22:20.869842 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:21.218372 containerd[1485]: time="2025-05-16T02:22:21.218249231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"022efc533547a480516fe58ad1c5c421123bc8d6681e21257bb7ee0e55336cc7\" pid:3047 exited_at:{seconds:1747362141 nanos:217396500}" May 16 02:22:21.870858 kubelet[1896]: E0516 02:22:21.870756 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:22.871628 kubelet[1896]: E0516 02:22:22.871511 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:23.711831 kubelet[1896]: E0516 02:22:23.711697 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:23.872215 
kubelet[1896]: E0516 02:22:23.872132 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:24.872647 kubelet[1896]: E0516 02:22:24.872484 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:25.873544 kubelet[1896]: E0516 02:22:25.873421 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:26.874570 kubelet[1896]: E0516 02:22:26.874474 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:27.875164 kubelet[1896]: E0516 02:22:27.875083 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:28.876267 kubelet[1896]: E0516 02:22:28.876189 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:29.876919 kubelet[1896]: E0516 02:22:29.876830 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:30.879116 kubelet[1896]: E0516 02:22:30.879026 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:31.880193 kubelet[1896]: E0516 02:22:31.880029 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:32.880543 kubelet[1896]: E0516 02:22:32.880360 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:33.881700 kubelet[1896]: E0516 02:22:33.881592 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:34.882551 kubelet[1896]: E0516 02:22:34.882437 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:35.883465 kubelet[1896]: E0516 02:22:35.883384 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:36.884360 kubelet[1896]: E0516 02:22:36.884262 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:37.885429 kubelet[1896]: E0516 02:22:37.885325 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:38.886251 kubelet[1896]: E0516 02:22:38.886128 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:39.886407 kubelet[1896]: E0516 02:22:39.886313 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:40.887523 kubelet[1896]: E0516 02:22:40.887432 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:41.888016 kubelet[1896]: E0516 02:22:41.887887 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:42.888814 kubelet[1896]: E0516 02:22:42.888628 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" May 16 02:22:43.711296 kubelet[1896]: E0516 02:22:43.711184 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:43.889478 kubelet[1896]: E0516 02:22:43.889314 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:44.890367 kubelet[1896]: E0516 02:22:44.890185 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:45.891363 kubelet[1896]: E0516 02:22:45.891245 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:46.891969 kubelet[1896]: E0516 02:22:46.891871 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:47.893179 kubelet[1896]: E0516 02:22:47.893059 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:48.894153 kubelet[1896]: E0516 02:22:48.894049 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:49.894943 kubelet[1896]: E0516 02:22:49.894828 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:50.895587 kubelet[1896]: E0516 02:22:50.895410 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:51.208390 containerd[1485]: time="2025-05-16T02:22:51.208081503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"b64e5a9b86784f214df0250e4b18945e9fe837e7395299925dba84ff77a9e82d\" pid:3083 exited_at:{seconds:1747362171 nanos:207601272}" May 16 02:22:51.896196 kubelet[1896]: E0516 02:22:51.896101 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:52.897082 kubelet[1896]: E0516 02:22:52.896966 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:53.898182 kubelet[1896]: E0516 02:22:53.898119 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:54.900355 kubelet[1896]: E0516 02:22:54.900242 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:55.900901 kubelet[1896]: E0516 02:22:55.900837 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:56.901351 kubelet[1896]: E0516 02:22:56.901249 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:57.902529 kubelet[1896]: E0516 02:22:57.902370 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:58.903481 kubelet[1896]: E0516 02:22:58.903394 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:22:59.903738 kubelet[1896]: E0516 02:22:59.903597 1896 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:00.904104 kubelet[1896]: E0516 02:23:00.904025 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:01.905202 kubelet[1896]: E0516 02:23:01.905064 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:02.905387 kubelet[1896]: E0516 02:23:02.905298 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:03.711475 kubelet[1896]: E0516 02:23:03.711380 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:03.905711 kubelet[1896]: E0516 02:23:03.905604 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:04.906567 kubelet[1896]: E0516 02:23:04.906449 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:05.907672 kubelet[1896]: E0516 02:23:05.907537 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:06.908628 kubelet[1896]: E0516 02:23:06.908463 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:07.909680 kubelet[1896]: E0516 02:23:07.909596 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:08.910429 kubelet[1896]: E0516 02:23:08.910324 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:09.910984 kubelet[1896]: E0516 02:23:09.910860 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:10.911532 kubelet[1896]: E0516 02:23:10.911426 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:11.912020 kubelet[1896]: E0516 02:23:11.911938 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:12.912836 kubelet[1896]: E0516 02:23:12.912643 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:13.920392 kubelet[1896]: E0516 02:23:13.912958 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:14.914125 kubelet[1896]: E0516 02:23:14.914033 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:15.914435 kubelet[1896]: E0516 02:23:15.914328 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:16.915406 kubelet[1896]: E0516 02:23:16.915320 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:17.916738 kubelet[1896]: E0516 02:23:17.916431 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:18.917468 
kubelet[1896]: E0516 02:23:18.917253 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:19.918480 kubelet[1896]: E0516 02:23:19.918377 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:20.919267 kubelet[1896]: E0516 02:23:20.919120 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:21.213394 containerd[1485]: time="2025-05-16T02:23:21.213316985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"6640e96cb93620c89e1600ed01f9509e2248260ea7d91aa706d8494a226c7882\" pid:3118 exited_at:{seconds:1747362201 nanos:212714494}" May 16 02:23:21.919891 kubelet[1896]: E0516 02:23:21.919761 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:22.920478 kubelet[1896]: E0516 02:23:22.920378 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:23.712023 kubelet[1896]: E0516 02:23:23.711884 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:23.921253 kubelet[1896]: E0516 02:23:23.921180 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:24.921563 kubelet[1896]: E0516 02:23:24.921440 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:25.922048 kubelet[1896]: E0516 02:23:25.921917 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:26.928028 kubelet[1896]: E0516 02:23:26.927338 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:27.928891 kubelet[1896]: E0516 02:23:27.928673 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:28.930109 kubelet[1896]: E0516 02:23:28.929994 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:29.931391 kubelet[1896]: E0516 02:23:29.931294 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:30.932461 kubelet[1896]: E0516 02:23:30.932336 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:31.932726 kubelet[1896]: E0516 02:23:31.932622 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:32.934484 kubelet[1896]: E0516 02:23:32.934400 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:33.934896 kubelet[1896]: E0516 02:23:33.934817 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:34.935601 kubelet[1896]: E0516 02:23:34.935409 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 16 02:23:35.936599 kubelet[1896]: E0516 02:23:35.936476 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:36.937808 kubelet[1896]: E0516 02:23:36.937633 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:37.938927 kubelet[1896]: E0516 02:23:37.938821 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:38.939490 kubelet[1896]: E0516 02:23:38.939359 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:39.940318 kubelet[1896]: E0516 02:23:39.940127 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:40.341287 systemd[1]: Created slice kubepods-besteffort-pod59e793f8_3b09_4c17_9896_e700bc5ae059.slice - libcontainer container kubepods-besteffort-pod59e793f8_3b09_4c17_9896_e700bc5ae059.slice. May 16 02:23:40.444864 kubelet[1896]: I0516 02:23:40.444690 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6jx\" (UniqueName: \"kubernetes.io/projected/59e793f8-3b09-4c17-9896-e700bc5ae059-kube-api-access-7s6jx\") pod \"nginx-deployment-7fcdb87857-m25wv\" (UID: \"59e793f8-3b09-4c17-9896-e700bc5ae059\") " pod="default/nginx-deployment-7fcdb87857-m25wv" May 16 02:23:40.654033 containerd[1485]: time="2025-05-16T02:23:40.653621722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m25wv,Uid:59e793f8-3b09-4c17-9896-e700bc5ae059,Namespace:default,Attempt:0,}" May 16 02:23:40.942090 kubelet[1896]: E0516 02:23:40.940441 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:41.119100 systemd-networkd[1388]: calif8ae657dcaf: Link UP May 16 02:23:41.119944 systemd-networkd[1388]: calif8ae657dcaf: Gained carrier May 16 02:23:41.140320 containerd[1485]: 2025-05-16 02:23:40.912 [INFO][3137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0 nginx-deployment-7fcdb87857- default 59e793f8-3b09-4c17-9896-e700bc5ae059 1970 0 2025-05-16 02:23:40 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.70 nginx-deployment-7fcdb87857-m25wv eth0 default [] [] [kns.default ksa.default.default] calif8ae657dcaf [] [] }} ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Namespace="default" Pod="nginx-deployment-7fcdb87857-m25wv" WorkloadEndpoint="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-" May 16 02:23:41.140320 containerd[1485]: 2025-05-16 02:23:40.912 [INFO][3137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Namespace="default" Pod="nginx-deployment-7fcdb87857-m25wv" WorkloadEndpoint="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" May 16 02:23:41.140320 containerd[1485]: 2025-05-16 02:23:40.998 [INFO][3149] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" HandleID="k8s-pod-network.c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Workload="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.000 [INFO][3149] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" HandleID="k8s-pod-network.c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Workload="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d270), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.70", "pod":"nginx-deployment-7fcdb87857-m25wv", "timestamp":"2025-05-16 02:23:40.998267473 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.000 [INFO][3149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.001 [INFO][3149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.001 [INFO][3149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.018 [INFO][3149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" host="172.24.4.70" May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.031 [INFO][3149] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.053 [INFO][3149] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.059 [INFO][3149] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.066 [INFO][3149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:23:41.141657 containerd[1485]: 2025-05-16 02:23:41.066 [INFO][3149] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" host="172.24.4.70" May 16 02:23:41.143412 containerd[1485]: 2025-05-16 02:23:41.073 [INFO][3149] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe May 16 02:23:41.143412 containerd[1485]: 2025-05-16 02:23:41.087 [INFO][3149] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" host="172.24.4.70" May 16 02:23:41.143412 containerd[1485]: 2025-05-16 02:23:41.103 [INFO][3149] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.130/26] block=192.168.76.128/26 handle="k8s-pod-network.c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" host="172.24.4.70" May 16 02:23:41.143412 containerd[1485]: 2025-05-16 02:23:41.104 [INFO][3149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.76.130/26] handle="k8s-pod-network.c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" host="172.24.4.70" May 16 02:23:41.143412 containerd[1485]: 2025-05-16 02:23:41.104 [INFO][3149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 02:23:41.143412 containerd[1485]: 2025-05-16 02:23:41.104 [INFO][3149] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.130/26] IPv6=[] ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" HandleID="k8s-pod-network.c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Workload="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" May 16 02:23:41.145278 containerd[1485]: 2025-05-16 02:23:41.106 [INFO][3137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Namespace="default" Pod="nginx-deployment-7fcdb87857-m25wv" WorkloadEndpoint="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"59e793f8-3b09-4c17-9896-e700bc5ae059", ResourceVersion:"1970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-m25wv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif8ae657dcaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:23:41.145278 containerd[1485]: 2025-05-16 02:23:41.107 [INFO][3137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.130/32] ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Namespace="default" Pod="nginx-deployment-7fcdb87857-m25wv" WorkloadEndpoint="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" May 16 02:23:41.145586 containerd[1485]: 2025-05-16 02:23:41.107 [INFO][3137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8ae657dcaf ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Namespace="default" Pod="nginx-deployment-7fcdb87857-m25wv" WorkloadEndpoint="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" May 16 02:23:41.145586 containerd[1485]: 2025-05-16 02:23:41.120 [INFO][3137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Namespace="default" Pod="nginx-deployment-7fcdb87857-m25wv" WorkloadEndpoint="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" May 16 02:23:41.145893 containerd[1485]: 2025-05-16 02:23:41.121 [INFO][3137] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Namespace="default" Pod="nginx-deployment-7fcdb87857-m25wv" WorkloadEndpoint="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"59e793f8-3b09-4c17-9896-e700bc5ae059", ResourceVersion:"1970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe", Pod:"nginx-deployment-7fcdb87857-m25wv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif8ae657dcaf", MAC:"ca:21:e2:a5:d2:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:23:41.146086 containerd[1485]: 2025-05-16 02:23:41.138 [INFO][3137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" Namespace="default" Pod="nginx-deployment-7fcdb87857-m25wv" WorkloadEndpoint="172.24.4.70-k8s-nginx--deployment--7fcdb87857--m25wv-eth0" May 16 02:23:41.205142 containerd[1485]: time="2025-05-16T02:23:41.204477905Z" level=info msg="connecting to shim c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe" address="unix:///run/containerd/s/58fbce0568c9d9bb7e485f42ac8cd8a9bca445b036af903d083d17a1a21d17a5" namespace=k8s.io protocol=ttrpc version=3 May 16 02:23:41.246976 systemd[1]: Started cri-containerd-c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe.scope - libcontainer container c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe. 
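In the Calico CNI records above, IPAM claims 192.168.76.130 for nginx-deployment-7fcdb87857-m25wv out of the block 192.168.76.128/26 that has affinity to this host (172.24.4.70). The sketch below only illustrates that relationship and is not Calico's allocator: it confirms the claimed address lies inside the affine block and shows the block's size.

    // Illustration only: verify that the address Calico claimed for the nginx pod
    // falls inside the /26 block with affinity to node 172.24.4.70. Both values
    // are copied from the log records above.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, block, err := net.ParseCIDR("192.168.76.128/26") // affine block from the ipam.go records
        if err != nil {
            panic(err)
        }
        claimed := net.ParseIP("192.168.76.130") // address from "Successfully claimed IPs"

        ones, bits := block.Mask.Size()
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64 for a /26
        fmt.Printf("claimed %s inside block: %v\n", claimed, block.Contains(claimed))
    }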
May 16 02:23:41.314159 containerd[1485]: time="2025-05-16T02:23:41.314091270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m25wv,Uid:59e793f8-3b09-4c17-9896-e700bc5ae059,Namespace:default,Attempt:0,} returns sandbox id \"c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe\"" May 16 02:23:41.316447 containerd[1485]: time="2025-05-16T02:23:41.316405604Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 02:23:41.941602 kubelet[1896]: E0516 02:23:41.941082 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:42.635185 systemd-networkd[1388]: calif8ae657dcaf: Gained IPv6LL May 16 02:23:42.942523 kubelet[1896]: E0516 02:23:42.942180 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:43.711670 kubelet[1896]: E0516 02:23:43.711592 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:43.944227 kubelet[1896]: E0516 02:23:43.943436 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:44.943990 kubelet[1896]: E0516 02:23:44.943912 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:45.365703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823875305.mount: Deactivated successfully. May 16 02:23:45.945448 kubelet[1896]: E0516 02:23:45.945396 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:46.849490 containerd[1485]: time="2025-05-16T02:23:46.849337002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:23:46.853485 containerd[1485]: time="2025-05-16T02:23:46.853300872Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306220" May 16 02:23:46.855379 containerd[1485]: time="2025-05-16T02:23:46.855284614Z" level=info msg="ImageCreate event name:\"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:23:46.863558 containerd[1485]: time="2025-05-16T02:23:46.863418118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:23:46.866628 containerd[1485]: time="2025-05-16T02:23:46.866331806Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 5.549843357s" May 16 02:23:46.866628 containerd[1485]: time="2025-05-16T02:23:46.866407468Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 16 02:23:46.876531 containerd[1485]: time="2025-05-16T02:23:46.876430969Z" level=info msg="CreateContainer within sandbox \"c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" May 16 02:23:46.921959 containerd[1485]: time="2025-05-16T02:23:46.908065658Z" level=info msg="Container 35f4363038220c99d67afbff89b241cc4bcd45bd0d3ec2d7264dbe84a54e8fd3: CDI devices from CRI Config.CDIDevices: []" May 16 02:23:46.928008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2817208119.mount: Deactivated successfully. May 16 02:23:46.944232 containerd[1485]: time="2025-05-16T02:23:46.944189491Z" level=info msg="CreateContainer within sandbox \"c43b6f0f72da6099d2e11a79f9d7d8b03759ff17694dd46f63d628dfde7d26fe\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"35f4363038220c99d67afbff89b241cc4bcd45bd0d3ec2d7264dbe84a54e8fd3\"" May 16 02:23:46.945724 kubelet[1896]: E0516 02:23:46.945670 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:46.946119 containerd[1485]: time="2025-05-16T02:23:46.946014025Z" level=info msg="StartContainer for \"35f4363038220c99d67afbff89b241cc4bcd45bd0d3ec2d7264dbe84a54e8fd3\"" May 16 02:23:46.948662 containerd[1485]: time="2025-05-16T02:23:46.948570222Z" level=info msg="connecting to shim 35f4363038220c99d67afbff89b241cc4bcd45bd0d3ec2d7264dbe84a54e8fd3" address="unix:///run/containerd/s/58fbce0568c9d9bb7e485f42ac8cd8a9bca445b036af903d083d17a1a21d17a5" protocol=ttrpc version=3 May 16 02:23:46.985001 systemd[1]: Started cri-containerd-35f4363038220c99d67afbff89b241cc4bcd45bd0d3ec2d7264dbe84a54e8fd3.scope - libcontainer container 35f4363038220c99d67afbff89b241cc4bcd45bd0d3ec2d7264dbe84a54e8fd3. May 16 02:23:47.037792 containerd[1485]: time="2025-05-16T02:23:47.037724548Z" level=info msg="StartContainer for \"35f4363038220c99d67afbff89b241cc4bcd45bd0d3ec2d7264dbe84a54e8fd3\" returns successfully" May 16 02:23:47.946721 kubelet[1896]: E0516 02:23:47.946631 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:47.951312 kubelet[1896]: I0516 02:23:47.950866 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-m25wv" podStartSLOduration=2.396667895 podStartE2EDuration="7.950736772s" podCreationTimestamp="2025-05-16 02:23:40 +0000 UTC" firstStartedPulling="2025-05-16 02:23:41.315757888 +0000 UTC m=+258.316201532" lastFinishedPulling="2025-05-16 02:23:46.869826715 +0000 UTC m=+263.870270409" observedRunningTime="2025-05-16 02:23:47.948955669 +0000 UTC m=+264.949399414" watchObservedRunningTime="2025-05-16 02:23:47.950736772 +0000 UTC m=+264.951180466" May 16 02:23:48.948356 kubelet[1896]: E0516 02:23:48.947828 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:49.948827 kubelet[1896]: E0516 02:23:49.948639 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:50.949844 kubelet[1896]: E0516 02:23:50.949704 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:51.172444 containerd[1485]: time="2025-05-16T02:23:51.172356024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"0abe3ef606ff434eac496c676a6eac42bb34e339d39b952d2d794bde69bd3e28\" pid:3308 exited_at:{seconds:1747362231 nanos:171586511}" May 16 02:23:51.951216 kubelet[1896]: E0516 
02:23:51.950991 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:52.952411 kubelet[1896]: E0516 02:23:52.952292 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:53.953430 kubelet[1896]: E0516 02:23:53.953309 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:54.953969 kubelet[1896]: E0516 02:23:54.953872 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:55.954356 kubelet[1896]: E0516 02:23:55.954193 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:56.339520 systemd[1]: Created slice kubepods-besteffort-pod20b0265a_8043_43ea_88ac_a2a73a5b0852.slice - libcontainer container kubepods-besteffort-pod20b0265a_8043_43ea_88ac_a2a73a5b0852.slice. May 16 02:23:56.471152 kubelet[1896]: I0516 02:23:56.471006 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98nvq\" (UniqueName: \"kubernetes.io/projected/20b0265a-8043-43ea-88ac-a2a73a5b0852-kube-api-access-98nvq\") pod \"nfs-server-provisioner-0\" (UID: \"20b0265a-8043-43ea-88ac-a2a73a5b0852\") " pod="default/nfs-server-provisioner-0" May 16 02:23:56.471800 kubelet[1896]: I0516 02:23:56.471334 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/20b0265a-8043-43ea-88ac-a2a73a5b0852-data\") pod \"nfs-server-provisioner-0\" (UID: \"20b0265a-8043-43ea-88ac-a2a73a5b0852\") " pod="default/nfs-server-provisioner-0" May 16 02:23:56.653362 containerd[1485]: time="2025-05-16T02:23:56.653064996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:20b0265a-8043-43ea-88ac-a2a73a5b0852,Namespace:default,Attempt:0,}" May 16 02:23:56.955406 kubelet[1896]: E0516 02:23:56.955291 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:56.983801 systemd-networkd[1388]: cali60e51b789ff: Link UP May 16 02:23:56.984069 systemd-networkd[1388]: cali60e51b789ff: Gained carrier May 16 02:23:57.039193 containerd[1485]: 2025-05-16 02:23:56.791 [INFO][3330] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 20b0265a-8043-43ea-88ac-a2a73a5b0852 2028 0 2025-05-16 02:23:56 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.70 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } 
{statd-udp UDP 662 0 }] [] }} ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.70-k8s-nfs--server--provisioner--0-" May 16 02:23:57.039193 containerd[1485]: 2025-05-16 02:23:56.791 [INFO][3330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" May 16 02:23:57.039193 containerd[1485]: 2025-05-16 02:23:56.874 [INFO][3341] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" HandleID="k8s-pod-network.4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Workload="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.874 [INFO][3341] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" HandleID="k8s-pod-network.4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Workload="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fab0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.70", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-16 02:23:56.874403344 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.875 [INFO][3341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.875 [INFO][3341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
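Annotation: the kubelet error repeating once a second above (and throughout the rest of this log) comes from its file-based config source: the static-pod manifest directory, normally set via the kubelet's staticPodPath, is /etc/kubernetes/manifests here, the directory does not exist on this node, and the kubelet logs and ignores that on every poll. A minimal sketch of that poll-and-ignore loop, assuming nothing beyond what the log shows (an illustration, not the kubelet's actual file_linux.go code):

package main

import (
	"log"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; everything else is illustrative.
	const manifestDir = "/etc/kubernetes/manifests"

	for range time.Tick(time.Second) {
		if _, err := os.Stat(manifestDir); os.IsNotExist(err) {
			// A missing directory is treated as "nothing to do": log it and keep polling.
			log.Printf("Unable to read config path %q: path does not exist, ignoring", manifestDir)
			continue
		}
		// If the directory existed, manifests dropped into it would be picked up here.
	}
}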
May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.875 [INFO][3341] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.894 [INFO][3341] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" host="172.24.4.70" May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.907 [INFO][3341] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.917 [INFO][3341] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.924 [INFO][3341] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.932 [INFO][3341] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:23:57.040388 containerd[1485]: 2025-05-16 02:23:56.933 [INFO][3341] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" host="172.24.4.70" May 16 02:23:57.041435 containerd[1485]: 2025-05-16 02:23:56.937 [INFO][3341] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b May 16 02:23:57.041435 containerd[1485]: 2025-05-16 02:23:56.950 [INFO][3341] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" host="172.24.4.70" May 16 02:23:57.041435 containerd[1485]: 2025-05-16 02:23:56.965 [INFO][3341] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.131/26] block=192.168.76.128/26 handle="k8s-pod-network.4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" host="172.24.4.70" May 16 02:23:57.041435 containerd[1485]: 2025-05-16 02:23:56.965 [INFO][3341] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.131/26] handle="k8s-pod-network.4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" host="172.24.4.70" May 16 02:23:57.041435 containerd[1485]: 2025-05-16 02:23:56.965 [INFO][3341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
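Annotation: the IPAM exchange above is Calico's block-affinity allocation: node 172.24.4.70 already holds affinity for the block 192.168.76.128/26, so the plugin takes the host-wide lock, loads that block, and claims the next free address, 192.168.76.131, for nfs-server-provisioner-0. A throwaway check that the claimed address really lies in that block (both values copied from the log; not Calico code):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.76.128/26") // affinity block for node 172.24.4.70
	claimed := netip.MustParseAddr("192.168.76.131")    // address just handed to the pod

	fmt.Println(block.Contains(claimed)) // prints true
}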
May 16 02:23:57.041435 containerd[1485]: 2025-05-16 02:23:56.965 [INFO][3341] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.131/26] IPv6=[] ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" HandleID="k8s-pod-network.4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Workload="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" May 16 02:23:57.044163 containerd[1485]: 2025-05-16 02:23:56.970 [INFO][3330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"20b0265a-8043-43ea-88ac-a2a73a5b0852", ResourceVersion:"2028", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.76.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:23:57.044163 containerd[1485]: 2025-05-16 02:23:56.970 [INFO][3330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.131/32] ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" May 16 02:23:57.044163 containerd[1485]: 2025-05-16 02:23:56.970 [INFO][3330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" May 16 02:23:57.044163 containerd[1485]: 2025-05-16 02:23:56.979 [INFO][3330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" May 16 02:23:57.044561 containerd[1485]: 2025-05-16 02:23:56.979 [INFO][3330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"20b0265a-8043-43ea-88ac-a2a73a5b0852", ResourceVersion:"2028", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.76.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"26:96:d7:d3:57:89", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:23:57.044561 containerd[1485]: 2025-05-16 02:23:57.022 [INFO][3330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.70-k8s-nfs--server--provisioner--0-eth0" May 16 02:23:57.099878 containerd[1485]: time="2025-05-16T02:23:57.099703726Z" level=info msg="connecting to shim 4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b" address="unix:///run/containerd/s/c3e1bfa0a3049a5e7dfd9db86768be4d863e370d2d4cba5a203b9c2e3ae92d64" namespace=k8s.io protocol=ttrpc version=3 May 16 02:23:57.145943 systemd[1]: Started cri-containerd-4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b.scope - libcontainer container 4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b. 
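Annotation: in the WorkloadEndpoint dump above the ports print in hexadecimal (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296); they are the same NFS service ports declared in decimal when the endpoint was first logged (2049, 32803, 20048, 875, 111, 662). A one-off conversion to make the dump easier to read:

package main

import "fmt"

func main() {
	// Hex values copied from the struct dump; comments give the decimal ports.
	ports := map[string]uint16{
		"nfs":      0x801,  // 2049
		"nlockmgr": 0x8023, // 32803
		"mountd":   0x4e50, // 20048
		"rquotad":  0x36b,  // 875
		"rpcbind":  0x6f,   // 111
		"statd":    0x296,  // 662
	}
	for name, port := range ports {
		fmt.Printf("%-8s %d\n", name, port)
	}
}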
May 16 02:23:57.200387 containerd[1485]: time="2025-05-16T02:23:57.200192926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:20b0265a-8043-43ea-88ac-a2a73a5b0852,Namespace:default,Attempt:0,} returns sandbox id \"4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b\"" May 16 02:23:57.202817 containerd[1485]: time="2025-05-16T02:23:57.202588892Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 16 02:23:57.956155 kubelet[1896]: E0516 02:23:57.956009 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:58.505600 systemd-networkd[1388]: cali60e51b789ff: Gained IPv6LL May 16 02:23:58.966222 kubelet[1896]: E0516 02:23:58.957198 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:23:59.957985 kubelet[1896]: E0516 02:23:59.957802 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:00.403413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258284638.mount: Deactivated successfully. May 16 02:24:00.958982 kubelet[1896]: E0516 02:24:00.958933 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:01.960102 kubelet[1896]: E0516 02:24:01.960017 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:02.961588 kubelet[1896]: E0516 02:24:02.960663 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:03.027739 containerd[1485]: time="2025-05-16T02:24:03.027630592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:03.029119 containerd[1485]: time="2025-05-16T02:24:03.029024919Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" May 16 02:24:03.030830 containerd[1485]: time="2025-05-16T02:24:03.030802635Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:03.033698 containerd[1485]: time="2025-05-16T02:24:03.033657282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:03.035385 containerd[1485]: time="2025-05-16T02:24:03.035335872Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.832705692s" May 16 02:24:03.035674 containerd[1485]: time="2025-05-16T02:24:03.035501643Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 16 02:24:03.038645 containerd[1485]: 
time="2025-05-16T02:24:03.038606470Z" level=info msg="CreateContainer within sandbox \"4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 16 02:24:03.055715 containerd[1485]: time="2025-05-16T02:24:03.055517224Z" level=info msg="Container abedf20f5f42cd2df7efaa7ac48489d0f2568a37de348ba2f1922055ab42abb9: CDI devices from CRI Config.CDIDevices: []" May 16 02:24:03.078280 containerd[1485]: time="2025-05-16T02:24:03.078220869Z" level=info msg="CreateContainer within sandbox \"4f784954a039e1ff9af40a8e256c05947a38c53f69aee66e75c658ac771c564b\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"abedf20f5f42cd2df7efaa7ac48489d0f2568a37de348ba2f1922055ab42abb9\"" May 16 02:24:03.078997 containerd[1485]: time="2025-05-16T02:24:03.078967880Z" level=info msg="StartContainer for \"abedf20f5f42cd2df7efaa7ac48489d0f2568a37de348ba2f1922055ab42abb9\"" May 16 02:24:03.080544 containerd[1485]: time="2025-05-16T02:24:03.080246791Z" level=info msg="connecting to shim abedf20f5f42cd2df7efaa7ac48489d0f2568a37de348ba2f1922055ab42abb9" address="unix:///run/containerd/s/c3e1bfa0a3049a5e7dfd9db86768be4d863e370d2d4cba5a203b9c2e3ae92d64" protocol=ttrpc version=3 May 16 02:24:03.111949 systemd[1]: Started cri-containerd-abedf20f5f42cd2df7efaa7ac48489d0f2568a37de348ba2f1922055ab42abb9.scope - libcontainer container abedf20f5f42cd2df7efaa7ac48489d0f2568a37de348ba2f1922055ab42abb9. May 16 02:24:03.151108 containerd[1485]: time="2025-05-16T02:24:03.151022447Z" level=info msg="StartContainer for \"abedf20f5f42cd2df7efaa7ac48489d0f2568a37de348ba2f1922055ab42abb9\" returns successfully" May 16 02:24:03.712197 kubelet[1896]: E0516 02:24:03.712057 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:03.961701 kubelet[1896]: E0516 02:24:03.961577 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:04.100671 kubelet[1896]: I0516 02:24:04.099239 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.264607879 podStartE2EDuration="8.09913508s" podCreationTimestamp="2025-05-16 02:23:56 +0000 UTC" firstStartedPulling="2025-05-16 02:23:57.202149217 +0000 UTC m=+274.202592871" lastFinishedPulling="2025-05-16 02:24:03.036676428 +0000 UTC m=+280.037120072" observedRunningTime="2025-05-16 02:24:04.097202333 +0000 UTC m=+281.097646077" watchObservedRunningTime="2025-05-16 02:24:04.09913508 +0000 UTC m=+281.099578774" May 16 02:24:04.961978 kubelet[1896]: E0516 02:24:04.961850 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:05.963030 kubelet[1896]: E0516 02:24:05.962912 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:06.964125 kubelet[1896]: E0516 02:24:06.963997 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:07.964736 kubelet[1896]: E0516 02:24:07.964607 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:08.966100 kubelet[1896]: E0516 02:24:08.965965 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 16 02:24:09.966801 kubelet[1896]: E0516 02:24:09.966681 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:10.967751 kubelet[1896]: E0516 02:24:10.967662 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:11.970851 kubelet[1896]: E0516 02:24:11.968870 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:12.969619 kubelet[1896]: E0516 02:24:12.969526 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:13.970118 kubelet[1896]: E0516 02:24:13.970036 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:14.971059 kubelet[1896]: E0516 02:24:14.970948 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:15.971497 kubelet[1896]: E0516 02:24:15.971404 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:16.972594 kubelet[1896]: E0516 02:24:16.972435 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:17.972908 kubelet[1896]: E0516 02:24:17.972812 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:18.974153 kubelet[1896]: E0516 02:24:18.974051 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:19.975000 kubelet[1896]: E0516 02:24:19.974902 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:20.975500 kubelet[1896]: E0516 02:24:20.975343 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:21.167974 containerd[1485]: time="2025-05-16T02:24:21.167887776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"8e80631e01f0e36b8722b27af05b3b3597cca7739c55dc7159a72a8885b3849f\" pid:3521 exited_at:{seconds:1747362261 nanos:164883809}" May 16 02:24:21.975967 kubelet[1896]: E0516 02:24:21.975871 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:22.976184 kubelet[1896]: E0516 02:24:22.976074 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:23.711704 kubelet[1896]: E0516 02:24:23.711614 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:23.977047 kubelet[1896]: E0516 02:24:23.976844 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:24.977442 kubelet[1896]: E0516 02:24:24.977355 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:25.978467 kubelet[1896]: E0516 02:24:25.978389 1896 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:26.979583 kubelet[1896]: E0516 02:24:26.979469 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:27.054621 containerd[1485]: time="2025-05-16T02:24:27.054352235Z" level=warning msg="container event discarded" container=5fc37ea1a01c549841114e7d9413dc348f80318fd6099caee3f2d6615bd099e2 type=CONTAINER_CREATED_EVENT May 16 02:24:27.054621 containerd[1485]: time="2025-05-16T02:24:27.054549896Z" level=warning msg="container event discarded" container=5fc37ea1a01c549841114e7d9413dc348f80318fd6099caee3f2d6615bd099e2 type=CONTAINER_STARTED_EVENT May 16 02:24:27.100104 containerd[1485]: time="2025-05-16T02:24:27.099949906Z" level=warning msg="container event discarded" container=c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9 type=CONTAINER_CREATED_EVENT May 16 02:24:27.100104 containerd[1485]: time="2025-05-16T02:24:27.100038934Z" level=warning msg="container event discarded" container=c3b9d7490ab7001c2f9c71fc58ce6c92b167d07bc6e839b2464db50613bddcf9 type=CONTAINER_STARTED_EVENT May 16 02:24:27.980354 kubelet[1896]: E0516 02:24:27.980252 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:28.257951 systemd[1]: Created slice kubepods-besteffort-pod74d02e1e_41c4_433a_82e2_2018d65b94a4.slice - libcontainer container kubepods-besteffort-pod74d02e1e_41c4_433a_82e2_2018d65b94a4.slice. May 16 02:24:28.506975 kubelet[1896]: I0516 02:24:28.506861 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs6vk\" (UniqueName: \"kubernetes.io/projected/74d02e1e-41c4-433a-82e2-2018d65b94a4-kube-api-access-gs6vk\") pod \"test-pod-1\" (UID: \"74d02e1e-41c4-433a-82e2-2018d65b94a4\") " pod="default/test-pod-1" May 16 02:24:28.507161 kubelet[1896]: I0516 02:24:28.507030 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-27fa66c1-2b2f-4d7d-a37d-10ba97552db5\" (UniqueName: \"kubernetes.io/nfs/74d02e1e-41c4-433a-82e2-2018d65b94a4-pvc-27fa66c1-2b2f-4d7d-a37d-10ba97552db5\") pod \"test-pod-1\" (UID: \"74d02e1e-41c4-433a-82e2-2018d65b94a4\") " pod="default/test-pod-1" May 16 02:24:28.794882 kernel: FS-Cache: Loaded May 16 02:24:28.961232 kernel: RPC: Registered named UNIX socket transport module. May 16 02:24:28.961491 kernel: RPC: Registered udp transport module. May 16 02:24:28.961713 kernel: RPC: Registered tcp transport module. May 16 02:24:28.963475 kernel: RPC: Registered tcp-with-tls transport module. May 16 02:24:28.963660 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 16 02:24:28.981550 kubelet[1896]: E0516 02:24:28.981451 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:29.260891 containerd[1485]: time="2025-05-16T02:24:29.260097129Z" level=warning msg="container event discarded" container=c7a8476f16a39bea42b92b8a789b9434ca125c3a070fd6d1a943f07269e66a60 type=CONTAINER_CREATED_EVENT May 16 02:24:29.272882 kernel: NFS: Registering the id_resolver key type May 16 02:24:29.272997 kernel: Key type id_resolver registered May 16 02:24:29.274803 kernel: Key type id_legacy registered May 16 02:24:29.319224 nfsidmap[3553]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' May 16 02:24:29.330686 nfsidmap[3555]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' May 16 02:24:29.361646 containerd[1485]: time="2025-05-16T02:24:29.361515214Z" level=warning msg="container event discarded" container=c7a8476f16a39bea42b92b8a789b9434ca125c3a070fd6d1a943f07269e66a60 type=CONTAINER_STARTED_EVENT May 16 02:24:29.468710 containerd[1485]: time="2025-05-16T02:24:29.467571654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:74d02e1e-41c4-433a-82e2-2018d65b94a4,Namespace:default,Attempt:0,}" May 16 02:24:29.770598 systemd-networkd[1388]: cali5ec59c6bf6e: Link UP May 16 02:24:29.775431 systemd-networkd[1388]: cali5ec59c6bf6e: Gained carrier May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.587 [INFO][3557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-test--pod--1-eth0 default 74d02e1e-41c4-433a-82e2-2018d65b94a4 2122 0 2025-05-16 02:23:58 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.70 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.70-k8s-test--pod--1-" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.587 [INFO][3557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.70-k8s-test--pod--1-eth0" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.632 [INFO][3571] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" HandleID="k8s-pod-network.f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Workload="172.24.4.70-k8s-test--pod--1-eth0" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.632 [INFO][3571] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" HandleID="k8s-pod-network.f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Workload="172.24.4.70-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.70", "pod":"test-pod-1", "timestamp":"2025-05-16 02:24:29.632164594 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.632 [INFO][3571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.632 [INFO][3571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.632 [INFO][3571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.654 [INFO][3571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" host="172.24.4.70" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.672 [INFO][3571] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.687 [INFO][3571] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.693 [INFO][3571] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.710 [INFO][3571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.710 [INFO][3571] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" host="172.24.4.70" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.716 [INFO][3571] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.733 [INFO][3571] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" host="172.24.4.70" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.756 [INFO][3571] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.132/26] block=192.168.76.128/26 handle="k8s-pod-network.f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" host="172.24.4.70" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.756 [INFO][3571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.132/26] handle="k8s-pod-network.f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" host="172.24.4.70" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.756 [INFO][3571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
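Annotation: the nfsidmap warnings above record that the NFSv4 principal root@nfs-server-provisioner.default.svc.cluster.local does not map into the client's idmap domain 'novalocal'; they are logged as the PVC is mounted. test-pod-1 is then assigned 192.168.76.132 from the same /26 affinity block that served the NFS provisioner pod (192.168.76.131). Printing the block's span shows both addresses come from the single block held by node 172.24.4.70 (values from the log; a quick check, not Calico code):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.76.128/26")

	first := block.Addr()
	last := first
	for i := 0; i < 63; i++ { // a /26 holds 2^(32-26) = 64 addresses
		last = last.Next()
	}
	// Prints: 192.168.76.128/26 spans 192.168.76.128 - 192.168.76.191
	fmt.Printf("%s spans %s - %s\n", block, first, last)
}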
May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.756 [INFO][3571] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.132/26] IPv6=[] ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" HandleID="k8s-pod-network.f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Workload="172.24.4.70-k8s-test--pod--1-eth0" May 16 02:24:29.808155 containerd[1485]: 2025-05-16 02:24:29.759 [INFO][3557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.70-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"74d02e1e-41c4-433a-82e2-2018d65b94a4", ResourceVersion:"2122", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 23, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:29.811506 containerd[1485]: 2025-05-16 02:24:29.760 [INFO][3557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.132/32] ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.70-k8s-test--pod--1-eth0" May 16 02:24:29.811506 containerd[1485]: 2025-05-16 02:24:29.760 [INFO][3557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.70-k8s-test--pod--1-eth0" May 16 02:24:29.811506 containerd[1485]: 2025-05-16 02:24:29.779 [INFO][3557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.70-k8s-test--pod--1-eth0" May 16 02:24:29.811506 containerd[1485]: 2025-05-16 02:24:29.781 [INFO][3557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.70-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"74d02e1e-41c4-433a-82e2-2018d65b94a4", ResourceVersion:"2122", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 23, 
58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"5e:ec:d9:75:fc:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:29.811506 containerd[1485]: 2025-05-16 02:24:29.802 [INFO][3557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.70-k8s-test--pod--1-eth0" May 16 02:24:29.868930 containerd[1485]: time="2025-05-16T02:24:29.868173743Z" level=info msg="connecting to shim f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c" address="unix:///run/containerd/s/0ad70886d523322310f64029f8002f4b82840d6a84b246a4dd6e7f3032b23488" namespace=k8s.io protocol=ttrpc version=3 May 16 02:24:29.899961 systemd[1]: Started cri-containerd-f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c.scope - libcontainer container f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c. 
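Annotation: both sandboxes are reached the same way: containerd logs "connecting to shim ..." with a plain unix socket address under /run/containerd/s/ and then speaks ttrpc over it. The sketch below only shows dialing such a socket from Go, with the address taken from the log line above; it does not implement the ttrpc protocol the real shim client uses.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Socket path copied from the "connecting to shim" entry above; it only exists on that node.
	const sock = "/run/containerd/s/0ad70886d523322310f64029f8002f4b82840d6a84b246a4dd6e7f3032b23488"

	conn, err := net.Dial("unix", sock)
	if err != nil {
		fmt.Println("dial failed:", err) // expected anywhere other than the node itself
		return
	}
	defer conn.Close()
	fmt.Println("connected; containerd would now speak ttrpc on this connection")
}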
May 16 02:24:29.950498 containerd[1485]: time="2025-05-16T02:24:29.950412850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:74d02e1e-41c4-433a-82e2-2018d65b94a4,Namespace:default,Attempt:0,} returns sandbox id \"f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c\"" May 16 02:24:29.953160 containerd[1485]: time="2025-05-16T02:24:29.952812923Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 02:24:29.982004 kubelet[1896]: E0516 02:24:29.981954 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:30.488373 containerd[1485]: time="2025-05-16T02:24:30.488185830Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:30.491072 containerd[1485]: time="2025-05-16T02:24:30.490953183Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 16 02:24:30.501348 containerd[1485]: time="2025-05-16T02:24:30.501236930Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 548.37278ms" May 16 02:24:30.501509 containerd[1485]: time="2025-05-16T02:24:30.501355012Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 16 02:24:30.506667 containerd[1485]: time="2025-05-16T02:24:30.506574716Z" level=info msg="CreateContainer within sandbox \"f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 16 02:24:30.530181 containerd[1485]: time="2025-05-16T02:24:30.527157588Z" level=info msg="Container c273551032050a49533a3feb8dba7747b9aee1bfdfbcee3e0ede0e2bb7843e36: CDI devices from CRI Config.CDIDevices: []" May 16 02:24:30.559450 containerd[1485]: time="2025-05-16T02:24:30.559346799Z" level=info msg="CreateContainer within sandbox \"f7cbce2e5df6e51010fc612570c861a737d24a69ca25348944b2348fb33abf7c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c273551032050a49533a3feb8dba7747b9aee1bfdfbcee3e0ede0e2bb7843e36\"" May 16 02:24:30.560977 containerd[1485]: time="2025-05-16T02:24:30.560884245Z" level=info msg="StartContainer for \"c273551032050a49533a3feb8dba7747b9aee1bfdfbcee3e0ede0e2bb7843e36\"" May 16 02:24:30.563244 containerd[1485]: time="2025-05-16T02:24:30.563192576Z" level=info msg="connecting to shim c273551032050a49533a3feb8dba7747b9aee1bfdfbcee3e0ede0e2bb7843e36" address="unix:///run/containerd/s/0ad70886d523322310f64029f8002f4b82840d6a84b246a4dd6e7f3032b23488" protocol=ttrpc version=3 May 16 02:24:30.619247 systemd[1]: Started cri-containerd-c273551032050a49533a3feb8dba7747b9aee1bfdfbcee3e0ede0e2bb7843e36.scope - libcontainer container c273551032050a49533a3feb8dba7747b9aee1bfdfbcee3e0ede0e2bb7843e36. 
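Annotation: the nginx "pull" above completes in roughly half a second with only 61 bytes read and an ImageUpdate (rather than ImageCreate) event, which suggests the layers were already on disk and containerd only refreshed the image record. Since every containerd entry uses the same key="value" layout (time, level, msg), a throwaway parser is enough to extract fields when grepping these logs; this helper is an ad-hoc assumption, not part of any tool shown here:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample entry copied from the log above.
	line := `time="2025-05-16T02:24:30.490953183Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"`

	// msg may contain escaped quotes (\"), so allow backslash escapes inside it.
	re := regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="((?:[^"\\]|\\.)*)"`)
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Println("time: ", m[1])
		fmt.Println("level:", m[2])
		fmt.Println("msg:  ", m[3])
	}
}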
May 16 02:24:30.662713 containerd[1485]: time="2025-05-16T02:24:30.662672354Z" level=info msg="StartContainer for \"c273551032050a49533a3feb8dba7747b9aee1bfdfbcee3e0ede0e2bb7843e36\" returns successfully" May 16 02:24:30.889052 systemd-networkd[1388]: cali5ec59c6bf6e: Gained IPv6LL May 16 02:24:30.982979 kubelet[1896]: E0516 02:24:30.982892 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:31.263005 containerd[1485]: time="2025-05-16T02:24:31.262817932Z" level=warning msg="container event discarded" container=d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72 type=CONTAINER_CREATED_EVENT May 16 02:24:31.356599 containerd[1485]: time="2025-05-16T02:24:31.356455637Z" level=warning msg="container event discarded" container=d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72 type=CONTAINER_STARTED_EVENT May 16 02:24:31.984562 kubelet[1896]: E0516 02:24:31.984464 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:32.041523 containerd[1485]: time="2025-05-16T02:24:32.041411265Z" level=warning msg="container event discarded" container=d9fe8eb65bfa894a7b6215ae7d374554c75957399ff218bda1f68d0190724d72 type=CONTAINER_STOPPED_EVENT May 16 02:24:32.985591 kubelet[1896]: E0516 02:24:32.985451 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:33.986164 kubelet[1896]: E0516 02:24:33.986045 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:34.986558 kubelet[1896]: E0516 02:24:34.986504 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:35.987966 kubelet[1896]: E0516 02:24:35.987820 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:36.988858 kubelet[1896]: E0516 02:24:36.988711 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:37.830829 containerd[1485]: time="2025-05-16T02:24:37.830631397Z" level=warning msg="container event discarded" container=7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9 type=CONTAINER_CREATED_EVENT May 16 02:24:37.981870 containerd[1485]: time="2025-05-16T02:24:37.981690266Z" level=warning msg="container event discarded" container=7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9 type=CONTAINER_STARTED_EVENT May 16 02:24:37.989621 kubelet[1896]: E0516 02:24:37.989508 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:38.990306 kubelet[1896]: E0516 02:24:38.990226 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:39.991398 kubelet[1896]: E0516 02:24:39.991315 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:40.717145 containerd[1485]: time="2025-05-16T02:24:40.717000217Z" level=warning msg="container event discarded" container=7c1a38b538dc1871ee18d18fb3554870440afbb7c3157c6ee69dab04fe62a1c9 type=CONTAINER_STOPPED_EVENT May 16 02:24:40.992390 kubelet[1896]: E0516 02:24:40.992130 1896 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:41.992871 kubelet[1896]: E0516 02:24:41.992744 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:42.994043 kubelet[1896]: E0516 02:24:42.993914 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:43.712282 kubelet[1896]: E0516 02:24:43.712128 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:43.994583 kubelet[1896]: E0516 02:24:43.994325 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:44.999303 kubelet[1896]: E0516 02:24:44.999216 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:45.999740 kubelet[1896]: E0516 02:24:45.999656 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:47.000395 kubelet[1896]: E0516 02:24:47.000293 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:48.000977 kubelet[1896]: E0516 02:24:48.000862 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:49.001940 kubelet[1896]: E0516 02:24:49.001705 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:49.738575 containerd[1485]: time="2025-05-16T02:24:49.738400406Z" level=warning msg="container event discarded" container=5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b type=CONTAINER_CREATED_EVENT May 16 02:24:49.847113 containerd[1485]: time="2025-05-16T02:24:49.846933775Z" level=warning msg="container event discarded" container=5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b type=CONTAINER_STARTED_EVENT May 16 02:24:50.002157 kubelet[1896]: E0516 02:24:50.001894 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:51.003048 kubelet[1896]: E0516 02:24:51.002654 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:51.191916 containerd[1485]: time="2025-05-16T02:24:51.191746668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"2a812fa0427f80b6a5344251b8d8176eddb2a9a0c8a0aed78e51225e2caea5c4\" pid:3713 exited_at:{seconds:1747362291 nanos:190677942}" May 16 02:24:51.585526 kubelet[1896]: I0516 02:24:51.585202 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=53.033975661 podStartE2EDuration="53.585146763s" podCreationTimestamp="2025-05-16 02:23:58 +0000 UTC" firstStartedPulling="2025-05-16 02:24:29.951852721 +0000 UTC m=+306.952296365" lastFinishedPulling="2025-05-16 02:24:30.503023742 +0000 UTC m=+307.503467467" observedRunningTime="2025-05-16 02:24:31.20866639 +0000 UTC m=+308.209110114" watchObservedRunningTime="2025-05-16 02:24:51.585146763 +0000 UTC m=+328.585590457" May 16 02:24:51.622929 systemd[1]: Created slice 
kubepods-burstable-pod6c3f6bd4_006c_4335_81ac_1e83b24210cf.slice - libcontainer container kubepods-burstable-pod6c3f6bd4_006c_4335_81ac_1e83b24210cf.slice. May 16 02:24:51.644038 systemd[1]: Created slice kubepods-burstable-pod9c1b426b_9fc0_4446_9602_f219653933f4.slice - libcontainer container kubepods-burstable-pod9c1b426b_9fc0_4446_9602_f219653933f4.slice. May 16 02:24:51.692188 kubelet[1896]: I0516 02:24:51.691031 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxw7\" (UniqueName: \"kubernetes.io/projected/6c3f6bd4-006c-4335-81ac-1e83b24210cf-kube-api-access-cwxw7\") pod \"coredns-668d6bf9bc-r5njm\" (UID: \"6c3f6bd4-006c-4335-81ac-1e83b24210cf\") " pod="kube-system/coredns-668d6bf9bc-r5njm" May 16 02:24:51.692188 kubelet[1896]: I0516 02:24:51.691185 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c3f6bd4-006c-4335-81ac-1e83b24210cf-config-volume\") pod \"coredns-668d6bf9bc-r5njm\" (UID: \"6c3f6bd4-006c-4335-81ac-1e83b24210cf\") " pod="kube-system/coredns-668d6bf9bc-r5njm" May 16 02:24:51.706916 kubelet[1896]: W0516 02:24:51.706757 1896 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:172.24.4.70" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node '172.24.4.70' and this object May 16 02:24:51.707317 kubelet[1896]: E0516 02:24:51.706998 1896 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:172.24.4.70\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172.24.4.70' and this object" logger="UnhandledError" May 16 02:24:51.707317 kubelet[1896]: W0516 02:24:51.707133 1896 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:172.24.4.70" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node '172.24.4.70' and this object May 16 02:24:51.707317 kubelet[1896]: E0516 02:24:51.707171 1896 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:172.24.4.70\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172.24.4.70' and this object" logger="UnhandledError" May 16 02:24:51.708423 kubelet[1896]: I0516 02:24:51.706752 1896 status_manager.go:890] "Failed to get status for pod" podUID="0fb2f30c-5b53-402c-a680-17efdad682e5" pod="calico-system/calico-kube-controllers-7bc47596c9-jtvrh" err="pods \"calico-kube-controllers-7bc47596c9-jtvrh\" is forbidden: User \"system:node:172.24.4.70\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172.24.4.70' and this object" May 16 02:24:51.715511 systemd[1]: Created slice kubepods-besteffort-pod0fb2f30c_5b53_402c_a680_17efdad682e5.slice - libcontainer container 
kubepods-besteffort-pod0fb2f30c_5b53_402c_a680_17efdad682e5.slice. May 16 02:24:51.741857 systemd[1]: Created slice kubepods-besteffort-pod0c4f09d7_c50b_4399_b41e_885e1dd46474.slice - libcontainer container kubepods-besteffort-pod0c4f09d7_c50b_4399_b41e_885e1dd46474.slice. May 16 02:24:51.756888 systemd[1]: Created slice kubepods-besteffort-pod97f42233_859e_49fa_8153_464a2bd680ed.slice - libcontainer container kubepods-besteffort-pod97f42233_859e_49fa_8153_464a2bd680ed.slice. May 16 02:24:51.764700 systemd[1]: Created slice kubepods-besteffort-podf2ca8e90_56b3_45b2_ba49_751853591242.slice - libcontainer container kubepods-besteffort-podf2ca8e90_56b3_45b2_ba49_751853591242.slice. May 16 02:24:51.775626 systemd[1]: Created slice kubepods-besteffort-pod5acca69e_71f4_4b57_94b7_9788df5c994a.slice - libcontainer container kubepods-besteffort-pod5acca69e_71f4_4b57_94b7_9788df5c994a.slice. May 16 02:24:51.792119 kubelet[1896]: I0516 02:24:51.791743 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c1b426b-9fc0-4446-9602-f219653933f4-config-volume\") pod \"coredns-668d6bf9bc-9fsxb\" (UID: \"9c1b426b-9fc0-4446-9602-f219653933f4\") " pod="kube-system/coredns-668d6bf9bc-9fsxb" May 16 02:24:51.792119 kubelet[1896]: I0516 02:24:51.791821 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqst\" (UniqueName: \"kubernetes.io/projected/0fb2f30c-5b53-402c-a680-17efdad682e5-kube-api-access-kxqst\") pod \"calico-kube-controllers-7bc47596c9-jtvrh\" (UID: \"0fb2f30c-5b53-402c-a680-17efdad682e5\") " pod="calico-system/calico-kube-controllers-7bc47596c9-jtvrh" May 16 02:24:51.792119 kubelet[1896]: I0516 02:24:51.791850 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm48n\" (UniqueName: \"kubernetes.io/projected/9c1b426b-9fc0-4446-9602-f219653933f4-kube-api-access-zm48n\") pod \"coredns-668d6bf9bc-9fsxb\" (UID: \"9c1b426b-9fc0-4446-9602-f219653933f4\") " pod="kube-system/coredns-668d6bf9bc-9fsxb" May 16 02:24:51.792119 kubelet[1896]: I0516 02:24:51.791871 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fb2f30c-5b53-402c-a680-17efdad682e5-tigera-ca-bundle\") pod \"calico-kube-controllers-7bc47596c9-jtvrh\" (UID: \"0fb2f30c-5b53-402c-a680-17efdad682e5\") " pod="calico-system/calico-kube-controllers-7bc47596c9-jtvrh" May 16 02:24:51.792119 kubelet[1896]: I0516 02:24:51.791941 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c4f09d7-c50b-4399-b41e-885e1dd46474-whisker-backend-key-pair\") pod \"whisker-67b68d469c-fjqlb\" (UID: \"0c4f09d7-c50b-4399-b41e-885e1dd46474\") " pod="calico-system/whisker-67b68d469c-fjqlb" May 16 02:24:51.792678 kubelet[1896]: I0516 02:24:51.792002 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4f09d7-c50b-4399-b41e-885e1dd46474-whisker-ca-bundle\") pod \"whisker-67b68d469c-fjqlb\" (UID: \"0c4f09d7-c50b-4399-b41e-885e1dd46474\") " pod="calico-system/whisker-67b68d469c-fjqlb" May 16 02:24:51.792678 kubelet[1896]: I0516 02:24:51.792105 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-6bdfr\" (UniqueName: \"kubernetes.io/projected/0c4f09d7-c50b-4399-b41e-885e1dd46474-kube-api-access-6bdfr\") pod \"whisker-67b68d469c-fjqlb\" (UID: \"0c4f09d7-c50b-4399-b41e-885e1dd46474\") " pod="calico-system/whisker-67b68d469c-fjqlb" May 16 02:24:51.904169 kubelet[1896]: I0516 02:24:51.903912 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f2ca8e90-56b3-45b2-ba49-751853591242-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-8ccw8\" (UID: \"f2ca8e90-56b3-45b2-ba49-751853591242\") " pod="calico-system/goldmane-78d55f7ddc-8ccw8" May 16 02:24:51.904169 kubelet[1896]: I0516 02:24:51.904054 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/97f42233-859e-49fa-8153-464a2bd680ed-calico-apiserver-certs\") pod \"calico-apiserver-7466bd8df7-wrdzb\" (UID: \"97f42233-859e-49fa-8153-464a2bd680ed\") " pod="calico-apiserver/calico-apiserver-7466bd8df7-wrdzb" May 16 02:24:51.904169 kubelet[1896]: I0516 02:24:51.904125 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cztmv\" (UniqueName: \"kubernetes.io/projected/f2ca8e90-56b3-45b2-ba49-751853591242-kube-api-access-cztmv\") pod \"goldmane-78d55f7ddc-8ccw8\" (UID: \"f2ca8e90-56b3-45b2-ba49-751853591242\") " pod="calico-system/goldmane-78d55f7ddc-8ccw8" May 16 02:24:51.904592 kubelet[1896]: I0516 02:24:51.904330 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2ca8e90-56b3-45b2-ba49-751853591242-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-8ccw8\" (UID: \"f2ca8e90-56b3-45b2-ba49-751853591242\") " pod="calico-system/goldmane-78d55f7ddc-8ccw8" May 16 02:24:51.904592 kubelet[1896]: I0516 02:24:51.904423 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9gkv\" (UniqueName: \"kubernetes.io/projected/5acca69e-71f4-4b57-94b7-9788df5c994a-kube-api-access-w9gkv\") pod \"calico-apiserver-7466bd8df7-ww4bk\" (UID: \"5acca69e-71f4-4b57-94b7-9788df5c994a\") " pod="calico-apiserver/calico-apiserver-7466bd8df7-ww4bk" May 16 02:24:51.904592 kubelet[1896]: I0516 02:24:51.904555 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ca8e90-56b3-45b2-ba49-751853591242-config\") pod \"goldmane-78d55f7ddc-8ccw8\" (UID: \"f2ca8e90-56b3-45b2-ba49-751853591242\") " pod="calico-system/goldmane-78d55f7ddc-8ccw8" May 16 02:24:51.904845 kubelet[1896]: I0516 02:24:51.904736 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mxdj\" (UniqueName: \"kubernetes.io/projected/97f42233-859e-49fa-8153-464a2bd680ed-kube-api-access-8mxdj\") pod \"calico-apiserver-7466bd8df7-wrdzb\" (UID: \"97f42233-859e-49fa-8153-464a2bd680ed\") " pod="calico-apiserver/calico-apiserver-7466bd8df7-wrdzb" May 16 02:24:51.904973 kubelet[1896]: I0516 02:24:51.904838 1896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5acca69e-71f4-4b57-94b7-9788df5c994a-calico-apiserver-certs\") pod \"calico-apiserver-7466bd8df7-ww4bk\" (UID: 
\"5acca69e-71f4-4b57-94b7-9788df5c994a\") " pod="calico-apiserver/calico-apiserver-7466bd8df7-ww4bk" May 16 02:24:51.966391 containerd[1485]: time="2025-05-16T02:24:51.965368390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r5njm,Uid:6c3f6bd4-006c-4335-81ac-1e83b24210cf,Namespace:kube-system,Attempt:0,}" May 16 02:24:52.003943 kubelet[1896]: E0516 02:24:52.003888 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:52.029485 containerd[1485]: time="2025-05-16T02:24:52.029291684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bc47596c9-jtvrh,Uid:0fb2f30c-5b53-402c-a680-17efdad682e5,Namespace:calico-system,Attempt:0,}" May 16 02:24:52.063232 containerd[1485]: time="2025-05-16T02:24:52.063171055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466bd8df7-wrdzb,Uid:97f42233-859e-49fa-8153-464a2bd680ed,Namespace:calico-apiserver,Attempt:0,}" May 16 02:24:52.071380 containerd[1485]: time="2025-05-16T02:24:52.071127344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-8ccw8,Uid:f2ca8e90-56b3-45b2-ba49-751853591242,Namespace:calico-system,Attempt:0,}" May 16 02:24:52.083028 containerd[1485]: time="2025-05-16T02:24:52.082969223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466bd8df7-ww4bk,Uid:5acca69e-71f4-4b57-94b7-9788df5c994a,Namespace:calico-apiserver,Attempt:0,}" May 16 02:24:52.250426 containerd[1485]: time="2025-05-16T02:24:52.250367244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fsxb,Uid:9c1b426b-9fc0-4446-9602-f219653933f4,Namespace:kube-system,Attempt:0,}" May 16 02:24:52.307851 systemd-networkd[1388]: cali0089629ed35: Link UP May 16 02:24:52.308139 systemd-networkd[1388]: cali0089629ed35: Gained carrier May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.061 [INFO][3734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0 coredns-668d6bf9bc- kube-system 6c3f6bd4-006c-4335-81ac-1e83b24210cf 2219 0 2025-05-16 02:24:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172.24.4.70 coredns-668d6bf9bc-r5njm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0089629ed35 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-r5njm" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.061 [INFO][3734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-r5njm" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.178 [INFO][3764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" HandleID="k8s-pod-network.5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Workload="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" May 16 02:24:52.337123 
containerd[1485]: 2025-05-16 02:24:52.180 [INFO][3764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" HandleID="k8s-pod-network.5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Workload="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9180), Attrs:map[string]string{"namespace":"kube-system", "node":"172.24.4.70", "pod":"coredns-668d6bf9bc-r5njm", "timestamp":"2025-05-16 02:24:52.17881242 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.181 [INFO][3764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.181 [INFO][3764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.181 [INFO][3764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.199 [INFO][3764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" host="172.24.4.70" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.233 [INFO][3764] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.246 [INFO][3764] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.251 [INFO][3764] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.258 [INFO][3764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.258 [INFO][3764] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" host="172.24.4.70" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.262 [INFO][3764] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6 May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.273 [INFO][3764] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" host="172.24.4.70" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.291 [INFO][3764] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.133/26] block=192.168.76.128/26 handle="k8s-pod-network.5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" host="172.24.4.70" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.291 [INFO][3764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.133/26] handle="k8s-pod-network.5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" host="172.24.4.70" May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.291 [INFO][3764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 02:24:52.337123 containerd[1485]: 2025-05-16 02:24:52.291 [INFO][3764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.133/26] IPv6=[] ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" HandleID="k8s-pod-network.5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Workload="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" May 16 02:24:52.339386 containerd[1485]: 2025-05-16 02:24:52.297 [INFO][3734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-r5njm" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6c3f6bd4-006c-4335-81ac-1e83b24210cf", ResourceVersion:"2219", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"coredns-668d6bf9bc-r5njm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0089629ed35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:52.339386 containerd[1485]: 2025-05-16 02:24:52.297 [INFO][3734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.133/32] ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-r5njm" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" May 16 02:24:52.339386 containerd[1485]: 2025-05-16 02:24:52.297 [INFO][3734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0089629ed35 ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-r5njm" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" May 16 02:24:52.339386 containerd[1485]: 2025-05-16 02:24:52.306 [INFO][3734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-r5njm" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" May 16 
02:24:52.339386 containerd[1485]: 2025-05-16 02:24:52.317 [INFO][3734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-r5njm" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6c3f6bd4-006c-4335-81ac-1e83b24210cf", ResourceVersion:"2219", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6", Pod:"coredns-668d6bf9bc-r5njm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0089629ed35", MAC:"b6:65:4a:f0:33:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:52.339386 containerd[1485]: 2025-05-16 02:24:52.335 [INFO][3734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-r5njm" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--r5njm-eth0" May 16 02:24:52.393536 containerd[1485]: time="2025-05-16T02:24:52.393423701Z" level=info msg="connecting to shim 5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6" address="unix:///run/containerd/s/8521be5a38196790a5f1d211795dcf8c8df4c07a2bb622ddb6226bb7ea06096e" namespace=k8s.io protocol=ttrpc version=3 May 16 02:24:52.426974 systemd[1]: Started cri-containerd-5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6.scope - libcontainer container 5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6. 
May 16 02:24:52.541827 containerd[1485]: time="2025-05-16T02:24:52.541658043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r5njm,Uid:6c3f6bd4-006c-4335-81ac-1e83b24210cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6\"" May 16 02:24:52.547562 containerd[1485]: time="2025-05-16T02:24:52.547335376Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 02:24:52.563556 systemd-networkd[1388]: cali5505883f5ce: Link UP May 16 02:24:52.565710 systemd-networkd[1388]: cali5505883f5ce: Gained carrier May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.115 [INFO][3750] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0 calico-kube-controllers-7bc47596c9- calico-system 0fb2f30c-5b53-402c-a680-17efdad682e5 2256 0 2025-05-16 02:24:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bc47596c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172.24.4.70 calico-kube-controllers-7bc47596c9-jtvrh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5505883f5ce [] [] }} ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Namespace="calico-system" Pod="calico-kube-controllers-7bc47596c9-jtvrh" WorkloadEndpoint="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.116 [INFO][3750] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Namespace="calico-system" Pod="calico-kube-controllers-7bc47596c9-jtvrh" WorkloadEndpoint="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.188 [INFO][3803] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" HandleID="k8s-pod-network.ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Workload="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.189 [INFO][3803] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" HandleID="k8s-pod-network.ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Workload="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.70", "pod":"calico-kube-controllers-7bc47596c9-jtvrh", "timestamp":"2025-05-16 02:24:52.188428163 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.189 [INFO][3803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.293 [INFO][3803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.293 [INFO][3803] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.324 [INFO][3803] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" host="172.24.4.70" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.341 [INFO][3803] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.352 [INFO][3803] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.357 [INFO][3803] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.362 [INFO][3803] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.363 [INFO][3803] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" host="172.24.4.70" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.367 [INFO][3803] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252 May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.377 [INFO][3803] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" host="172.24.4.70" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.549 [INFO][3803] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.134/26] block=192.168.76.128/26 handle="k8s-pod-network.ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" host="172.24.4.70" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.550 [INFO][3803] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.134/26] handle="k8s-pod-network.ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" host="172.24.4.70" May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.550 [INFO][3803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 02:24:52.589013 containerd[1485]: 2025-05-16 02:24:52.551 [INFO][3803] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.134/26] IPv6=[] ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" HandleID="k8s-pod-network.ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Workload="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" May 16 02:24:52.590057 containerd[1485]: 2025-05-16 02:24:52.554 [INFO][3750] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Namespace="calico-system" Pod="calico-kube-controllers-7bc47596c9-jtvrh" WorkloadEndpoint="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0", GenerateName:"calico-kube-controllers-7bc47596c9-", Namespace:"calico-system", SelfLink:"", UID:"0fb2f30c-5b53-402c-a680-17efdad682e5", ResourceVersion:"2256", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bc47596c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"calico-kube-controllers-7bc47596c9-jtvrh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5505883f5ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:52.590057 containerd[1485]: 2025-05-16 02:24:52.555 [INFO][3750] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.134/32] ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Namespace="calico-system" Pod="calico-kube-controllers-7bc47596c9-jtvrh" WorkloadEndpoint="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" May 16 02:24:52.590057 containerd[1485]: 2025-05-16 02:24:52.555 [INFO][3750] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5505883f5ce ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Namespace="calico-system" Pod="calico-kube-controllers-7bc47596c9-jtvrh" WorkloadEndpoint="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" May 16 02:24:52.590057 containerd[1485]: 2025-05-16 02:24:52.567 [INFO][3750] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Namespace="calico-system" Pod="calico-kube-controllers-7bc47596c9-jtvrh" WorkloadEndpoint="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" May 16 02:24:52.590057 containerd[1485]: 2025-05-16 02:24:52.570 [INFO][3750] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Namespace="calico-system" Pod="calico-kube-controllers-7bc47596c9-jtvrh" WorkloadEndpoint="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0", GenerateName:"calico-kube-controllers-7bc47596c9-", Namespace:"calico-system", SelfLink:"", UID:"0fb2f30c-5b53-402c-a680-17efdad682e5", ResourceVersion:"2256", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bc47596c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252", Pod:"calico-kube-controllers-7bc47596c9-jtvrh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5505883f5ce", MAC:"62:dd:33:42:12:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:52.590057 containerd[1485]: 2025-05-16 02:24:52.584 [INFO][3750] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" Namespace="calico-system" Pod="calico-kube-controllers-7bc47596c9-jtvrh" WorkloadEndpoint="172.24.4.70-k8s-calico--kube--controllers--7bc47596c9--jtvrh-eth0" May 16 02:24:52.655143 containerd[1485]: time="2025-05-16T02:24:52.653006955Z" level=info msg="connecting to shim ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252" address="unix:///run/containerd/s/e48556abf500dcb0ad457a97fef494ca140e245aedeea38e1b5256213e41326b" namespace=k8s.io protocol=ttrpc version=3 May 16 02:24:52.667075 systemd-networkd[1388]: cali20e3de1eb26: Link UP May 16 02:24:52.672980 systemd-networkd[1388]: cali20e3de1eb26: Gained carrier May 16 02:24:52.694062 systemd[1]: Started cri-containerd-ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252.scope - libcontainer container ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252. 
May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.172 [INFO][3781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0 goldmane-78d55f7ddc- calico-system f2ca8e90-56b3-45b2-ba49-751853591242 2265 0 2025-05-16 02:24:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172.24.4.70 goldmane-78d55f7ddc-8ccw8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali20e3de1eb26 [] [] }} ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8ccw8" WorkloadEndpoint="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.172 [INFO][3781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8ccw8" WorkloadEndpoint="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.217 [INFO][3814] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" HandleID="k8s-pod-network.99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Workload="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.217 [INFO][3814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" HandleID="k8s-pod-network.99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Workload="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d39b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.70", "pod":"goldmane-78d55f7ddc-8ccw8", "timestamp":"2025-05-16 02:24:52.217551819 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.218 [INFO][3814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.551 [INFO][3814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.551 [INFO][3814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.568 [INFO][3814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" host="172.24.4.70" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.588 [INFO][3814] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.602 [INFO][3814] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.606 [INFO][3814] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.611 [INFO][3814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.611 [INFO][3814] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" host="172.24.4.70" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.613 [INFO][3814] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0 May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.622 [INFO][3814] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" host="172.24.4.70" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.631 [INFO][3814] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.135/26] block=192.168.76.128/26 handle="k8s-pod-network.99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" host="172.24.4.70" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.632 [INFO][3814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.135/26] handle="k8s-pod-network.99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" host="172.24.4.70" May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.632 [INFO][3814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 02:24:52.698350 containerd[1485]: 2025-05-16 02:24:52.632 [INFO][3814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.135/26] IPv6=[] ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" HandleID="k8s-pod-network.99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Workload="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" May 16 02:24:52.699452 containerd[1485]: 2025-05-16 02:24:52.633 [INFO][3781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8ccw8" WorkloadEndpoint="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"f2ca8e90-56b3-45b2-ba49-751853591242", ResourceVersion:"2265", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"goldmane-78d55f7ddc-8ccw8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20e3de1eb26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:52.699452 containerd[1485]: 2025-05-16 02:24:52.634 [INFO][3781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.135/32] ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8ccw8" WorkloadEndpoint="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" May 16 02:24:52.699452 containerd[1485]: 2025-05-16 02:24:52.634 [INFO][3781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20e3de1eb26 ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8ccw8" WorkloadEndpoint="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" May 16 02:24:52.699452 containerd[1485]: 2025-05-16 02:24:52.673 [INFO][3781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8ccw8" WorkloadEndpoint="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" May 16 02:24:52.699452 containerd[1485]: 2025-05-16 02:24:52.675 [INFO][3781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8ccw8" WorkloadEndpoint="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"f2ca8e90-56b3-45b2-ba49-751853591242", ResourceVersion:"2265", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0", Pod:"goldmane-78d55f7ddc-8ccw8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20e3de1eb26", MAC:"ae:2e:44:54:31:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:52.699452 containerd[1485]: 2025-05-16 02:24:52.694 [INFO][3781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8ccw8" WorkloadEndpoint="172.24.4.70-k8s-goldmane--78d55f7ddc--8ccw8-eth0" May 16 02:24:52.760847 systemd-networkd[1388]: calicff1b026721: Link UP May 16 02:24:52.764431 systemd-networkd[1388]: calicff1b026721: Gained carrier May 16 02:24:52.782124 containerd[1485]: time="2025-05-16T02:24:52.781895307Z" level=info msg="connecting to shim 99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0" address="unix:///run/containerd/s/f523e7b26d1fc20003c3fc11ae2da632b38fa756da5dae3b6aa446041d574d7e" namespace=k8s.io protocol=ttrpc version=3 May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.174 [INFO][3765] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0 calico-apiserver-7466bd8df7- calico-apiserver 97f42233-859e-49fa-8153-464a2bd680ed 2267 0 2025-05-16 02:24:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7466bd8df7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172.24.4.70 calico-apiserver-7466bd8df7-wrdzb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicff1b026721 [] [] }} ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-wrdzb" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.174 [INFO][3765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Namespace="calico-apiserver" 
Pod="calico-apiserver-7466bd8df7-wrdzb" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.233 [INFO][3822] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" HandleID="k8s-pod-network.241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Workload="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.234 [INFO][3822] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" HandleID="k8s-pod-network.241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Workload="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9910), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172.24.4.70", "pod":"calico-apiserver-7466bd8df7-wrdzb", "timestamp":"2025-05-16 02:24:52.233962772 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.234 [INFO][3822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.633 [INFO][3822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.633 [INFO][3822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.670 [INFO][3822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" host="172.24.4.70" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.686 [INFO][3822] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.699 [INFO][3822] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.711 [INFO][3822] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.719 [INFO][3822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.720 [INFO][3822] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" host="172.24.4.70" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.722 [INFO][3822] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.733 [INFO][3822] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" host="172.24.4.70" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.744 [INFO][3822] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.76.136/26] block=192.168.76.128/26 handle="k8s-pod-network.241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" host="172.24.4.70" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.745 [INFO][3822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.136/26] handle="k8s-pod-network.241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" host="172.24.4.70" May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.746 [INFO][3822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 02:24:52.806355 containerd[1485]: 2025-05-16 02:24:52.746 [INFO][3822] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.136/26] IPv6=[] ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" HandleID="k8s-pod-network.241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Workload="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" May 16 02:24:52.814246 containerd[1485]: 2025-05-16 02:24:52.752 [INFO][3765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-wrdzb" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0", GenerateName:"calico-apiserver-7466bd8df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"97f42233-859e-49fa-8153-464a2bd680ed", ResourceVersion:"2267", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466bd8df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"calico-apiserver-7466bd8df7-wrdzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicff1b026721", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:52.814246 containerd[1485]: 2025-05-16 02:24:52.752 [INFO][3765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.136/32] ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-wrdzb" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" May 16 02:24:52.814246 containerd[1485]: 2025-05-16 02:24:52.752 [INFO][3765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicff1b026721 ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-wrdzb" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" May 
16 02:24:52.814246 containerd[1485]: 2025-05-16 02:24:52.765 [INFO][3765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-wrdzb" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" May 16 02:24:52.814246 containerd[1485]: 2025-05-16 02:24:52.766 [INFO][3765] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-wrdzb" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0", GenerateName:"calico-apiserver-7466bd8df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"97f42233-859e-49fa-8153-464a2bd680ed", ResourceVersion:"2267", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466bd8df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed", Pod:"calico-apiserver-7466bd8df7-wrdzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicff1b026721", MAC:"a6:af:f1:9e:dd:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:52.814246 containerd[1485]: 2025-05-16 02:24:52.782 [INFO][3765] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-wrdzb" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--wrdzb-eth0" May 16 02:24:52.886105 systemd[1]: Started cri-containerd-99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0.scope - libcontainer container 99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0. 
May 16 02:24:52.907254 containerd[1485]: time="2025-05-16T02:24:52.907207538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bc47596c9-jtvrh,Uid:0fb2f30c-5b53-402c-a680-17efdad682e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252\"" May 16 02:24:52.914578 kubelet[1896]: E0516 02:24:52.914096 1896 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition May 16 02:24:52.914578 kubelet[1896]: E0516 02:24:52.914299 1896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c4f09d7-c50b-4399-b41e-885e1dd46474-whisker-ca-bundle podName:0c4f09d7-c50b-4399-b41e-885e1dd46474 nodeName:}" failed. No retries permitted until 2025-05-16 02:24:53.414240945 +0000 UTC m=+330.414684600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/0c4f09d7-c50b-4399-b41e-885e1dd46474-whisker-ca-bundle") pod "whisker-67b68d469c-fjqlb" (UID: "0c4f09d7-c50b-4399-b41e-885e1dd46474") : failed to sync configmap cache: timed out waiting for the condition May 16 02:24:52.923520 kubelet[1896]: E0516 02:24:52.921245 1896 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition May 16 02:24:52.923520 kubelet[1896]: E0516 02:24:52.921354 1896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c4f09d7-c50b-4399-b41e-885e1dd46474-whisker-backend-key-pair podName:0c4f09d7-c50b-4399-b41e-885e1dd46474 nodeName:}" failed. No retries permitted until 2025-05-16 02:24:53.421331481 +0000 UTC m=+330.421775135 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/0c4f09d7-c50b-4399-b41e-885e1dd46474-whisker-backend-key-pair") pod "whisker-67b68d469c-fjqlb" (UID: "0c4f09d7-c50b-4399-b41e-885e1dd46474") : failed to sync secret cache: timed out waiting for the condition May 16 02:24:52.942595 containerd[1485]: time="2025-05-16T02:24:52.939833547Z" level=info msg="connecting to shim 241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed" address="unix:///run/containerd/s/a11b789c764b9f731bb46a623204a178781e860e6380e1c2a9dc097848ffc58d" namespace=k8s.io protocol=ttrpc version=3 May 16 02:24:52.952073 systemd-networkd[1388]: calif661469d96d: Link UP May 16 02:24:52.954631 systemd-networkd[1388]: calif661469d96d: Gained carrier May 16 02:24:53.003936 systemd[1]: Started cri-containerd-241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed.scope - libcontainer container 241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed. 
May 16 02:24:53.004878 kubelet[1896]: E0516 02:24:53.004832 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.213 [INFO][3793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0 calico-apiserver-7466bd8df7- calico-apiserver 5acca69e-71f4-4b57-94b7-9788df5c994a 2266 0 2025-05-16 02:24:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7466bd8df7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172.24.4.70 calico-apiserver-7466bd8df7-ww4bk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif661469d96d [] [] }} ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-ww4bk" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.213 [INFO][3793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-ww4bk" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.266 [INFO][3834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" HandleID="k8s-pod-network.785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Workload="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.266 [INFO][3834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" HandleID="k8s-pod-network.785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Workload="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172.24.4.70", "pod":"calico-apiserver-7466bd8df7-ww4bk", "timestamp":"2025-05-16 02:24:52.266748901 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.267 [INFO][3834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.746 [INFO][3834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.746 [INFO][3834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.771 [INFO][3834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" host="172.24.4.70" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.870 [INFO][3834] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.883 [INFO][3834] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.887 [INFO][3834] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.891 [INFO][3834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.891 [INFO][3834] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" host="172.24.4.70" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.896 [INFO][3834] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354 May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.907 [INFO][3834] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" host="172.24.4.70" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.930 [INFO][3834] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.137/26] block=192.168.76.128/26 handle="k8s-pod-network.785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" host="172.24.4.70" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.931 [INFO][3834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.137/26] handle="k8s-pod-network.785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" host="172.24.4.70" May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.931 [INFO][3834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 02:24:53.016393 containerd[1485]: 2025-05-16 02:24:52.931 [INFO][3834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.137/26] IPv6=[] ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" HandleID="k8s-pod-network.785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Workload="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" May 16 02:24:53.018945 containerd[1485]: 2025-05-16 02:24:52.936 [INFO][3793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-ww4bk" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0", GenerateName:"calico-apiserver-7466bd8df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"5acca69e-71f4-4b57-94b7-9788df5c994a", ResourceVersion:"2266", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466bd8df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"calico-apiserver-7466bd8df7-ww4bk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif661469d96d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:53.018945 containerd[1485]: 2025-05-16 02:24:52.937 [INFO][3793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.137/32] ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-ww4bk" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" May 16 02:24:53.018945 containerd[1485]: 2025-05-16 02:24:52.937 [INFO][3793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif661469d96d ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-ww4bk" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" May 16 02:24:53.018945 containerd[1485]: 2025-05-16 02:24:52.968 [INFO][3793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-ww4bk" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" May 16 02:24:53.018945 containerd[1485]: 2025-05-16 02:24:52.973 [INFO][3793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-ww4bk" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0", GenerateName:"calico-apiserver-7466bd8df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"5acca69e-71f4-4b57-94b7-9788df5c994a", ResourceVersion:"2266", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466bd8df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354", Pod:"calico-apiserver-7466bd8df7-ww4bk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif661469d96d", MAC:"66:60:6e:2c:b8:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:53.018945 containerd[1485]: 2025-05-16 02:24:53.013 [INFO][3793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" Namespace="calico-apiserver" Pod="calico-apiserver-7466bd8df7-ww4bk" WorkloadEndpoint="172.24.4.70-k8s-calico--apiserver--7466bd8df7--ww4bk-eth0" May 16 02:24:53.037702 containerd[1485]: time="2025-05-16T02:24:53.037598039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-8ccw8,Uid:f2ca8e90-56b3-45b2-ba49-751853591242,Namespace:calico-system,Attempt:0,} returns sandbox id \"99eebc650239505d9c720c7fdfd189af04fb7335465684145de302c40e0d2af0\"" May 16 02:24:53.103372 systemd-networkd[1388]: cali3c17fb37159: Link UP May 16 02:24:53.108019 systemd-networkd[1388]: cali3c17fb37159: Gained carrier May 16 02:24:53.127269 containerd[1485]: time="2025-05-16T02:24:53.127214653Z" level=info msg="connecting to shim 785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354" address="unix:///run/containerd/s/f779e84add6b6d04478819e851e8c07683d92717ebe4e085087ab99127455e8b" namespace=k8s.io protocol=ttrpc version=3 May 16 02:24:53.129335 containerd[1485]: time="2025-05-16T02:24:53.129191191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466bd8df7-wrdzb,Uid:97f42233-859e-49fa-8153-464a2bd680ed,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed\"" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:52.341 [INFO][3843] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0 
coredns-668d6bf9bc- kube-system 9c1b426b-9fc0-4446-9602-f219653933f4 2222 0 2025-05-16 02:24:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172.24.4.70 coredns-668d6bf9bc-9fsxb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c17fb37159 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fsxb" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:52.342 [INFO][3843] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fsxb" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:52.403 [INFO][3860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" HandleID="k8s-pod-network.88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Workload="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:52.403 [INFO][3860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" HandleID="k8s-pod-network.88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Workload="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3700), Attrs:map[string]string{"namespace":"kube-system", "node":"172.24.4.70", "pod":"coredns-668d6bf9bc-9fsxb", "timestamp":"2025-05-16 02:24:52.403117289 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:52.403 [INFO][3860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:52.931 [INFO][3860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:52.932 [INFO][3860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:52.961 [INFO][3860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" host="172.24.4.70" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:52.984 [INFO][3860] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.036 [INFO][3860] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.041 [INFO][3860] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.052 [INFO][3860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.052 [INFO][3860] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" host="172.24.4.70" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.056 [INFO][3860] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.063 [INFO][3860] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" host="172.24.4.70" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.079 [INFO][3860] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.138/26] block=192.168.76.128/26 handle="k8s-pod-network.88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" host="172.24.4.70" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.079 [INFO][3860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.138/26] handle="k8s-pod-network.88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" host="172.24.4.70" May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.079 [INFO][3860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 02:24:53.143936 containerd[1485]: 2025-05-16 02:24:53.079 [INFO][3860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.138/26] IPv6=[] ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" HandleID="k8s-pod-network.88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Workload="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" May 16 02:24:53.145113 containerd[1485]: 2025-05-16 02:24:53.085 [INFO][3843] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fsxb" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c1b426b-9fc0-4446-9602-f219653933f4", ResourceVersion:"2222", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"coredns-668d6bf9bc-9fsxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c17fb37159", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:53.145113 containerd[1485]: 2025-05-16 02:24:53.088 [INFO][3843] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.138/32] ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fsxb" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" May 16 02:24:53.145113 containerd[1485]: 2025-05-16 02:24:53.088 [INFO][3843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c17fb37159 ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fsxb" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" May 16 02:24:53.145113 containerd[1485]: 2025-05-16 02:24:53.110 [INFO][3843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fsxb" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" May 16 
02:24:53.145113 containerd[1485]: 2025-05-16 02:24:53.117 [INFO][3843] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fsxb" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c1b426b-9fc0-4446-9602-f219653933f4", ResourceVersion:"2222", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b", Pod:"coredns-668d6bf9bc-9fsxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c17fb37159", MAC:"be:db:4e:d0:79:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:53.145113 containerd[1485]: 2025-05-16 02:24:53.142 [INFO][3843] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fsxb" WorkloadEndpoint="172.24.4.70-k8s-coredns--668d6bf9bc--9fsxb-eth0" May 16 02:24:53.177532 systemd[1]: Started cri-containerd-785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354.scope - libcontainer container 785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354. 
May 16 02:24:53.198456 containerd[1485]: time="2025-05-16T02:24:53.197971990Z" level=info msg="connecting to shim 88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b" address="unix:///run/containerd/s/e26f4f74b16b0614b256fc769e680f0e2a00fade678a98cec24f3b14fccaf51e" namespace=k8s.io protocol=ttrpc version=3 May 16 02:24:53.215489 containerd[1485]: time="2025-05-16T02:24:53.215377028Z" level=warning msg="container event discarded" container=cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189 type=CONTAINER_CREATED_EVENT May 16 02:24:53.215489 containerd[1485]: time="2025-05-16T02:24:53.215438433Z" level=warning msg="container event discarded" container=cc8f48a876d6ee1a8fa4e79502de451ad00e6c93be55581bc09e7506fcf52189 type=CONTAINER_STARTED_EVENT May 16 02:24:53.236974 systemd[1]: Started cri-containerd-88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b.scope - libcontainer container 88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b. May 16 02:24:53.279787 containerd[1485]: time="2025-05-16T02:24:53.277636249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466bd8df7-ww4bk,Uid:5acca69e-71f4-4b57-94b7-9788df5c994a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354\"" May 16 02:24:53.324361 containerd[1485]: time="2025-05-16T02:24:53.324234334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fsxb,Uid:9c1b426b-9fc0-4446-9602-f219653933f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b\"" May 16 02:24:53.557511 containerd[1485]: time="2025-05-16T02:24:53.557427269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b68d469c-fjqlb,Uid:0c4f09d7-c50b-4399-b41e-885e1dd46474,Namespace:calico-system,Attempt:0,}" May 16 02:24:53.737609 systemd-networkd[1388]: cali20e3de1eb26: Gained IPv6LL May 16 02:24:53.839679 systemd-networkd[1388]: cali1c4abe9b23f: Link UP May 16 02:24:53.842408 systemd-networkd[1388]: cali1c4abe9b23f: Gained carrier May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.647 [INFO][4210] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0 whisker-67b68d469c- calico-system 0c4f09d7-c50b-4399-b41e-885e1dd46474 2264 0 2025-05-16 02:24:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67b68d469c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172.24.4.70 whisker-67b68d469c-fjqlb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1c4abe9b23f [] [] }} ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Namespace="calico-system" Pod="whisker-67b68d469c-fjqlb" WorkloadEndpoint="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.648 [INFO][4210] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Namespace="calico-system" Pod="whisker-67b68d469c-fjqlb" WorkloadEndpoint="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.705 [INFO][4222] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" HandleID="k8s-pod-network.e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Workload="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.706 [INFO][4222] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" HandleID="k8s-pod-network.e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Workload="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9660), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.70", "pod":"whisker-67b68d469c-fjqlb", "timestamp":"2025-05-16 02:24:53.705911841 +0000 UTC"}, Hostname:"172.24.4.70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.706 [INFO][4222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.706 [INFO][4222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.706 [INFO][4222] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.70' May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.730 [INFO][4222] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" host="172.24.4.70" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.743 [INFO][4222] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.70" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.758 [INFO][4222] ipam/ipam.go 511: Trying affinity for 192.168.76.128/26 host="172.24.4.70" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.765 [INFO][4222] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.775 [INFO][4222] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.128/26 host="172.24.4.70" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.775 [INFO][4222] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.128/26 handle="k8s-pod-network.e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" host="172.24.4.70" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.780 [INFO][4222] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998 May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.790 [INFO][4222] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.128/26 handle="k8s-pod-network.e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" host="172.24.4.70" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.821 [INFO][4222] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.139/26] block=192.168.76.128/26 handle="k8s-pod-network.e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" host="172.24.4.70" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.822 [INFO][4222] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.139/26] 
handle="k8s-pod-network.e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" host="172.24.4.70" May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.822 [INFO][4222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 02:24:53.895743 containerd[1485]: 2025-05-16 02:24:53.822 [INFO][4222] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.139/26] IPv6=[] ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" HandleID="k8s-pod-network.e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Workload="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" May 16 02:24:53.896741 containerd[1485]: 2025-05-16 02:24:53.832 [INFO][4210] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Namespace="calico-system" Pod="whisker-67b68d469c-fjqlb" WorkloadEndpoint="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0", GenerateName:"whisker-67b68d469c-", Namespace:"calico-system", SelfLink:"", UID:"0c4f09d7-c50b-4399-b41e-885e1dd46474", ResourceVersion:"2264", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67b68d469c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"", Pod:"whisker-67b68d469c-fjqlb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.139/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1c4abe9b23f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:53.896741 containerd[1485]: 2025-05-16 02:24:53.832 [INFO][4210] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.139/32] ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Namespace="calico-system" Pod="whisker-67b68d469c-fjqlb" WorkloadEndpoint="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" May 16 02:24:53.896741 containerd[1485]: 2025-05-16 02:24:53.833 [INFO][4210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c4abe9b23f ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Namespace="calico-system" Pod="whisker-67b68d469c-fjqlb" WorkloadEndpoint="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" May 16 02:24:53.896741 containerd[1485]: 2025-05-16 02:24:53.845 [INFO][4210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Namespace="calico-system" Pod="whisker-67b68d469c-fjqlb" WorkloadEndpoint="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" May 16 02:24:53.896741 containerd[1485]: 2025-05-16 02:24:53.847 [INFO][4210] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Namespace="calico-system" Pod="whisker-67b68d469c-fjqlb" WorkloadEndpoint="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0", GenerateName:"whisker-67b68d469c-", Namespace:"calico-system", SelfLink:"", UID:"0c4f09d7-c50b-4399-b41e-885e1dd46474", ResourceVersion:"2264", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 2, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67b68d469c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.70", ContainerID:"e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998", Pod:"whisker-67b68d469c-fjqlb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.139/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1c4abe9b23f", MAC:"6e:89:7b:8e:db:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 02:24:53.896741 containerd[1485]: 2025-05-16 02:24:53.886 [INFO][4210] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" Namespace="calico-system" Pod="whisker-67b68d469c-fjqlb" WorkloadEndpoint="172.24.4.70-k8s-whisker--67b68d469c--fjqlb-eth0" May 16 02:24:54.003289 containerd[1485]: time="2025-05-16T02:24:54.003216224Z" level=info msg="connecting to shim e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998" address="unix:///run/containerd/s/8d39b82eca3734507c1c405094f7584b4351b0952ff90f58ee0ec2f5cce0317a" namespace=k8s.io protocol=ttrpc version=3 May 16 02:24:54.005934 kubelet[1896]: E0516 02:24:54.005846 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:54.048973 systemd[1]: Started cri-containerd-e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998.scope - libcontainer container e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998. 
May 16 02:24:54.056965 systemd-networkd[1388]: cali0089629ed35: Gained IPv6LL May 16 02:24:54.125896 containerd[1485]: time="2025-05-16T02:24:54.125705939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b68d469c-fjqlb,Uid:0c4f09d7-c50b-4399-b41e-885e1dd46474,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6a1413bf6ee49537ec90e695cc30b630971d4c37edb1b5eea963ce1af2f2998\"" May 16 02:24:54.249390 systemd-networkd[1388]: cali5505883f5ce: Gained IPv6LL May 16 02:24:54.379039 systemd-networkd[1388]: calif661469d96d: Gained IPv6LL May 16 02:24:54.441128 systemd-networkd[1388]: calicff1b026721: Gained IPv6LL May 16 02:24:54.889005 systemd-networkd[1388]: cali3c17fb37159: Gained IPv6LL May 16 02:24:54.922975 containerd[1485]: time="2025-05-16T02:24:54.920655253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:54.924801 containerd[1485]: time="2025-05-16T02:24:54.924412462Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" May 16 02:24:54.926670 containerd[1485]: time="2025-05-16T02:24:54.926096743Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:54.931892 containerd[1485]: time="2025-05-16T02:24:54.931845670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:54.933356 containerd[1485]: time="2025-05-16T02:24:54.933327781Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.38593101s" May 16 02:24:54.933829 containerd[1485]: time="2025-05-16T02:24:54.933808634Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 02:24:54.936310 containerd[1485]: time="2025-05-16T02:24:54.936286824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 16 02:24:54.937473 containerd[1485]: time="2025-05-16T02:24:54.937446640Z" level=info msg="CreateContainer within sandbox \"5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 02:24:54.954041 systemd-networkd[1388]: cali1c4abe9b23f: Gained IPv6LL May 16 02:24:54.960859 containerd[1485]: time="2025-05-16T02:24:54.959666122Z" level=info msg="Container d9a6867bca3f48f7e655c5b4eb30a061079c7bd60233e0f10dbcbfb326792b2f: CDI devices from CRI Config.CDIDevices: []" May 16 02:24:54.976561 containerd[1485]: time="2025-05-16T02:24:54.976518072Z" level=info msg="CreateContainer within sandbox \"5ffea7b7125d57f8d213d94acb96d8c300ee44847e22ff13de0f4668f36eabf6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9a6867bca3f48f7e655c5b4eb30a061079c7bd60233e0f10dbcbfb326792b2f\"" May 16 02:24:54.977592 containerd[1485]: time="2025-05-16T02:24:54.977556481Z" level=info msg="StartContainer for 
\"d9a6867bca3f48f7e655c5b4eb30a061079c7bd60233e0f10dbcbfb326792b2f\"" May 16 02:24:54.978784 containerd[1485]: time="2025-05-16T02:24:54.978724773Z" level=info msg="connecting to shim d9a6867bca3f48f7e655c5b4eb30a061079c7bd60233e0f10dbcbfb326792b2f" address="unix:///run/containerd/s/8521be5a38196790a5f1d211795dcf8c8df4c07a2bb622ddb6226bb7ea06096e" protocol=ttrpc version=3 May 16 02:24:55.003935 systemd[1]: Started cri-containerd-d9a6867bca3f48f7e655c5b4eb30a061079c7bd60233e0f10dbcbfb326792b2f.scope - libcontainer container d9a6867bca3f48f7e655c5b4eb30a061079c7bd60233e0f10dbcbfb326792b2f. May 16 02:24:55.008402 kubelet[1896]: E0516 02:24:55.008348 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:55.043837 containerd[1485]: time="2025-05-16T02:24:55.043001399Z" level=info msg="StartContainer for \"d9a6867bca3f48f7e655c5b4eb30a061079c7bd60233e0f10dbcbfb326792b2f\" returns successfully" May 16 02:24:55.338605 kubelet[1896]: I0516 02:24:55.337868 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r5njm" podStartSLOduration=1.948836999 podStartE2EDuration="4.337718205s" podCreationTimestamp="2025-05-16 02:24:51 +0000 UTC" firstStartedPulling="2025-05-16 02:24:52.54618108 +0000 UTC m=+329.546624774" lastFinishedPulling="2025-05-16 02:24:54.935062336 +0000 UTC m=+331.935505980" observedRunningTime="2025-05-16 02:24:55.33595178 +0000 UTC m=+332.336395524" watchObservedRunningTime="2025-05-16 02:24:55.337718205 +0000 UTC m=+332.338161899" May 16 02:24:56.009078 kubelet[1896]: E0516 02:24:56.008994 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:56.293181 containerd[1485]: time="2025-05-16T02:24:56.292173065Z" level=warning msg="container event discarded" container=70cd8838d16b387ccd1b75e8a1738085d6e8f2117e7232fc2b52a7296ce2a5fa type=CONTAINER_CREATED_EVENT May 16 02:24:56.365886 containerd[1485]: time="2025-05-16T02:24:56.365726829Z" level=warning msg="container event discarded" container=70cd8838d16b387ccd1b75e8a1738085d6e8f2117e7232fc2b52a7296ce2a5fa type=CONTAINER_STARTED_EVENT May 16 02:24:57.010310 kubelet[1896]: E0516 02:24:57.009554 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:58.024517 kubelet[1896]: E0516 02:24:58.022390 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:59.021178 containerd[1485]: time="2025-05-16T02:24:59.020547496Z" level=warning msg="container event discarded" container=bb187e8e938223baccfbdb85aa0702bb1ccb176e284c4bef446dcc62c48e9bea type=CONTAINER_CREATED_EVENT May 16 02:24:59.026787 kubelet[1896]: E0516 02:24:59.026435 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:24:59.104668 containerd[1485]: time="2025-05-16T02:24:59.104557188Z" level=warning msg="container event discarded" container=bb187e8e938223baccfbdb85aa0702bb1ccb176e284c4bef446dcc62c48e9bea type=CONTAINER_STARTED_EVENT May 16 02:24:59.222288 containerd[1485]: time="2025-05-16T02:24:59.222216778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:59.224373 containerd[1485]: time="2025-05-16T02:24:59.224078292Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 16 02:24:59.225780 containerd[1485]: time="2025-05-16T02:24:59.225609635Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:59.230751 containerd[1485]: time="2025-05-16T02:24:59.230293183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:24:59.231427 containerd[1485]: time="2025-05-16T02:24:59.231362499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 4.29492896s" May 16 02:24:59.231594 containerd[1485]: time="2025-05-16T02:24:59.231562174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 16 02:24:59.238137 containerd[1485]: time="2025-05-16T02:24:59.238095413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 02:24:59.260892 containerd[1485]: time="2025-05-16T02:24:59.260800535Z" level=info msg="CreateContainer within sandbox \"ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 02:24:59.285318 containerd[1485]: time="2025-05-16T02:24:59.285078750Z" level=info msg="Container 98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29: CDI devices from CRI Config.CDIDevices: []" May 16 02:24:59.289062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1016275398.mount: Deactivated successfully. May 16 02:24:59.305892 containerd[1485]: time="2025-05-16T02:24:59.305640831Z" level=info msg="CreateContainer within sandbox \"ca9727b466f50acd35660708979e33352839e0cf4d0da3c8927097a70cd56252\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29\"" May 16 02:24:59.309879 containerd[1485]: time="2025-05-16T02:24:59.307633661Z" level=info msg="StartContainer for \"98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29\"" May 16 02:24:59.310721 containerd[1485]: time="2025-05-16T02:24:59.310674005Z" level=info msg="connecting to shim 98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29" address="unix:///run/containerd/s/e48556abf500dcb0ad457a97fef494ca140e245aedeea38e1b5256213e41326b" protocol=ttrpc version=3 May 16 02:24:59.360350 systemd[1]: Started cri-containerd-98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29.scope - libcontainer container 98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29. 
May 16 02:24:59.445353 containerd[1485]: time="2025-05-16T02:24:59.445160238Z" level=info msg="StartContainer for \"98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29\" returns successfully" May 16 02:24:59.580025 containerd[1485]: time="2025-05-16T02:24:59.579567554Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:24:59.584894 containerd[1485]: time="2025-05-16T02:24:59.581952068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:24:59.584894 containerd[1485]: time="2025-05-16T02:24:59.582246010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 02:24:59.591465 kubelet[1896]: E0516 02:24:59.583182 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 02:24:59.591465 kubelet[1896]: E0516 02:24:59.583986 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 02:24:59.592187 containerd[1485]: time="2025-05-16T02:24:59.589391317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 02:24:59.592414 kubelet[1896]: E0516 02:24:59.586409 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cztmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-8ccw8_calico-system(f2ca8e90-56b3-45b2-ba49-751853591242): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:24:59.592414 kubelet[1896]: E0516 02:24:59.588214 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8ccw8" podUID="f2ca8e90-56b3-45b2-ba49-751853591242" May 16 02:25:00.027750 kubelet[1896]: E0516 02:25:00.027675 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:00.365096 kubelet[1896]: E0516 02:25:00.363239 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8ccw8" podUID="f2ca8e90-56b3-45b2-ba49-751853591242" May 16 02:25:00.402543 kubelet[1896]: I0516 02:25:00.401264 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bc47596c9-jtvrh" podStartSLOduration=3.076270992 podStartE2EDuration="9.400748527s" podCreationTimestamp="2025-05-16 02:24:51 +0000 UTC" firstStartedPulling="2025-05-16 02:24:52.910024985 +0000 UTC m=+329.910468640" lastFinishedPulling="2025-05-16 02:24:59.234502531 +0000 UTC m=+336.234946175" observedRunningTime="2025-05-16 02:25:00.386096424 +0000 UTC m=+337.386540189" watchObservedRunningTime="2025-05-16 02:25:00.400748527 +0000 UTC m=+337.401192292" May 16 02:25:00.467724 containerd[1485]: time="2025-05-16T02:25:00.467654786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29\" id:\"6689e4edada10cd9f568775e8a8c6069b790abf6ae35a511eb950cdf124f8b95\" pid:4427 exited_at:{seconds:1747362300 nanos:466595630}" May 16 02:25:01.028634 kubelet[1896]: E0516 02:25:01.028422 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:02.029260 kubelet[1896]: E0516 02:25:02.029049 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:03.029796 kubelet[1896]: E0516 02:25:03.029672 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:03.507170 containerd[1485]: time="2025-05-16T02:25:03.507092030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:25:03.508964 containerd[1485]: time="2025-05-16T02:25:03.508697111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 16 02:25:03.510822 containerd[1485]: time="2025-05-16T02:25:03.510431245Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:25:03.513700 containerd[1485]: time="2025-05-16T02:25:03.513635377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:25:03.514782 containerd[1485]: time="2025-05-16T02:25:03.514702089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.925255187s" May 16 02:25:03.514976 containerd[1485]: time="2025-05-16T02:25:03.514955284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 16 02:25:03.517275 containerd[1485]: time="2025-05-16T02:25:03.517076604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 02:25:03.520263 containerd[1485]: time="2025-05-16T02:25:03.519317890Z" level=info msg="CreateContainer within sandbox \"241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 02:25:03.537280 containerd[1485]: time="2025-05-16T02:25:03.537243975Z" level=info msg="Container bbd8510bf81ec45ef79809c7d2e39b7d539d4f2d93fda8a39b5529f12f11c4b8: CDI devices from CRI Config.CDIDevices: []" May 16 02:25:03.551702 containerd[1485]: time="2025-05-16T02:25:03.551656407Z" level=info msg="CreateContainer within sandbox \"241632a689bd7b95d0c0769ded9cb8e6b9bed6bc83ea0f47366a783e37907fed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bbd8510bf81ec45ef79809c7d2e39b7d539d4f2d93fda8a39b5529f12f11c4b8\"" May 16 02:25:03.553861 containerd[1485]: time="2025-05-16T02:25:03.553008805Z" level=info msg="StartContainer for \"bbd8510bf81ec45ef79809c7d2e39b7d539d4f2d93fda8a39b5529f12f11c4b8\"" May 16 02:25:03.554584 containerd[1485]: time="2025-05-16T02:25:03.554473794Z" level=info msg="connecting to shim bbd8510bf81ec45ef79809c7d2e39b7d539d4f2d93fda8a39b5529f12f11c4b8" address="unix:///run/containerd/s/a11b789c764b9f731bb46a623204a178781e860e6380e1c2a9dc097848ffc58d" protocol=ttrpc version=3 May 16 02:25:03.601177 systemd[1]: Started cri-containerd-bbd8510bf81ec45ef79809c7d2e39b7d539d4f2d93fda8a39b5529f12f11c4b8.scope - libcontainer container bbd8510bf81ec45ef79809c7d2e39b7d539d4f2d93fda8a39b5529f12f11c4b8. 
May 16 02:25:03.683584 containerd[1485]: time="2025-05-16T02:25:03.683496274Z" level=info msg="StartContainer for \"bbd8510bf81ec45ef79809c7d2e39b7d539d4f2d93fda8a39b5529f12f11c4b8\" returns successfully" May 16 02:25:03.712013 kubelet[1896]: E0516 02:25:03.711912 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:04.018945 containerd[1485]: time="2025-05-16T02:25:04.018861043Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:25:04.020033 containerd[1485]: time="2025-05-16T02:25:04.019895053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 16 02:25:04.027777 containerd[1485]: time="2025-05-16T02:25:04.026532788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 509.423593ms" May 16 02:25:04.027777 containerd[1485]: time="2025-05-16T02:25:04.026571521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 16 02:25:04.030608 kubelet[1896]: E0516 02:25:04.030172 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:04.031966 containerd[1485]: time="2025-05-16T02:25:04.031903996Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 02:25:04.041032 containerd[1485]: time="2025-05-16T02:25:04.040964146Z" level=info msg="CreateContainer within sandbox \"785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 02:25:04.062789 containerd[1485]: time="2025-05-16T02:25:04.060027285Z" level=info msg="Container cf598462cf3dbb0742d91d7a39f03ce97b74bc734d2bad8e583f4628cd3366da: CDI devices from CRI Config.CDIDevices: []" May 16 02:25:04.078562 containerd[1485]: time="2025-05-16T02:25:04.078518560Z" level=info msg="CreateContainer within sandbox \"785af7e4cd43e0e213f8cc4b33efb489ffa0c8e7b31875577e6e42f0ad410354\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cf598462cf3dbb0742d91d7a39f03ce97b74bc734d2bad8e583f4628cd3366da\"" May 16 02:25:04.079547 containerd[1485]: time="2025-05-16T02:25:04.079519909Z" level=info msg="StartContainer for \"cf598462cf3dbb0742d91d7a39f03ce97b74bc734d2bad8e583f4628cd3366da\"" May 16 02:25:04.084417 containerd[1485]: time="2025-05-16T02:25:04.082799664Z" level=info msg="connecting to shim cf598462cf3dbb0742d91d7a39f03ce97b74bc734d2bad8e583f4628cd3366da" address="unix:///run/containerd/s/f779e84add6b6d04478819e851e8c07683d92717ebe4e085087ab99127455e8b" protocol=ttrpc version=3 May 16 02:25:04.124162 systemd[1]: Started cri-containerd-cf598462cf3dbb0742d91d7a39f03ce97b74bc734d2bad8e583f4628cd3366da.scope - libcontainer container cf598462cf3dbb0742d91d7a39f03ce97b74bc734d2bad8e583f4628cd3366da. 
May 16 02:25:04.212570 containerd[1485]: time="2025-05-16T02:25:04.212460111Z" level=info msg="StartContainer for \"cf598462cf3dbb0742d91d7a39f03ce97b74bc734d2bad8e583f4628cd3366da\" returns successfully" May 16 02:25:04.217793 containerd[1485]: time="2025-05-16T02:25:04.217371857Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 02:25:04.219822 containerd[1485]: time="2025-05-16T02:25:04.219724441Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" May 16 02:25:04.225064 containerd[1485]: time="2025-05-16T02:25:04.225008145Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 193.02473ms" May 16 02:25:04.225148 containerd[1485]: time="2025-05-16T02:25:04.225081042Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 02:25:04.227440 containerd[1485]: time="2025-05-16T02:25:04.227232117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 02:25:04.229369 containerd[1485]: time="2025-05-16T02:25:04.229038617Z" level=info msg="CreateContainer within sandbox \"88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 02:25:04.250790 containerd[1485]: time="2025-05-16T02:25:04.249298542Z" level=info msg="Container 162b13ed7f32bd01121f0b18dad8f16e80c6e620bf2c4fd812662276b9adbc08: CDI devices from CRI Config.CDIDevices: []" May 16 02:25:04.265206 containerd[1485]: time="2025-05-16T02:25:04.265052642Z" level=info msg="CreateContainer within sandbox \"88d6412352711b2ef6557283f13ab08ea5867212c81cae28e6e543f37b64e74b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"162b13ed7f32bd01121f0b18dad8f16e80c6e620bf2c4fd812662276b9adbc08\"" May 16 02:25:04.269098 containerd[1485]: time="2025-05-16T02:25:04.268957017Z" level=info msg="StartContainer for \"162b13ed7f32bd01121f0b18dad8f16e80c6e620bf2c4fd812662276b9adbc08\"" May 16 02:25:04.270284 containerd[1485]: time="2025-05-16T02:25:04.270003231Z" level=info msg="connecting to shim 162b13ed7f32bd01121f0b18dad8f16e80c6e620bf2c4fd812662276b9adbc08" address="unix:///run/containerd/s/e26f4f74b16b0614b256fc769e680f0e2a00fade678a98cec24f3b14fccaf51e" protocol=ttrpc version=3 May 16 02:25:04.306380 systemd[1]: Started cri-containerd-162b13ed7f32bd01121f0b18dad8f16e80c6e620bf2c4fd812662276b9adbc08.scope - libcontainer container 162b13ed7f32bd01121f0b18dad8f16e80c6e620bf2c4fd812662276b9adbc08. 
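Editor's note: the coredns and calico-apiserver pulls above succeed, but every pull of the Calico whisker, whisker-backend and goldmane images recorded below fails the same way — containerd cannot fetch an anonymous bearer token from ghcr.io and receives 403 Forbidden. A minimal sketch for replaying that token request directly from the node follows, using the exact URL from the log, to help distinguish a registry-side or network-level block from a containerd-specific problem. The URL is copied from the log; the interpretation of the responses is an assumption, not registry documentation.

```python
# Hedged sketch (not part of the log): replay the anonymous-token request that the
# whisker/goldmane pulls below keep failing on, straight from this node.
import urllib.error
import urllib.request

# Token URL copied verbatim from the containerd "fetch failed" entries.
TOKEN_URL = ("https://ghcr.io/token"
             "?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io")

try:
    with urllib.request.urlopen(TOKEN_URL, timeout=10) as resp:
        # A 200 with a JSON body would suggest anonymous pulls are permitted again.
        print("status:", resp.status)
        print(resp.read(300).decode("utf-8", "replace"))
except urllib.error.HTTPError as err:
    # A 403 here reproduces the "failed to fetch anonymous token ... 403 Forbidden"
    # errors containerd reports in the entries that follow.
    print("status:", err.code, err.reason)
```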
May 16 02:25:04.355615 containerd[1485]: time="2025-05-16T02:25:04.355564723Z" level=info msg="StartContainer for \"162b13ed7f32bd01121f0b18dad8f16e80c6e620bf2c4fd812662276b9adbc08\" returns successfully" May 16 02:25:04.433782 kubelet[1896]: I0516 02:25:04.430465 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9fsxb" podStartSLOduration=2.5314839239999998 podStartE2EDuration="13.430437339s" podCreationTimestamp="2025-05-16 02:24:51 +0000 UTC" firstStartedPulling="2025-05-16 02:24:53.327091665 +0000 UTC m=+330.327535319" lastFinishedPulling="2025-05-16 02:25:04.22604508 +0000 UTC m=+341.226488734" observedRunningTime="2025-05-16 02:25:04.428965167 +0000 UTC m=+341.429408841" watchObservedRunningTime="2025-05-16 02:25:04.430437339 +0000 UTC m=+341.430880983" May 16 02:25:04.490144 kubelet[1896]: I0516 02:25:04.489519 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7466bd8df7-wrdzb" podStartSLOduration=3.105235382 podStartE2EDuration="13.489490894s" podCreationTimestamp="2025-05-16 02:24:51 +0000 UTC" firstStartedPulling="2025-05-16 02:24:53.13269191 +0000 UTC m=+330.133135564" lastFinishedPulling="2025-05-16 02:25:03.516947432 +0000 UTC m=+340.517391076" observedRunningTime="2025-05-16 02:25:04.463589494 +0000 UTC m=+341.464033158" watchObservedRunningTime="2025-05-16 02:25:04.489490894 +0000 UTC m=+341.489934538" May 16 02:25:04.578849 containerd[1485]: time="2025-05-16T02:25:04.578717373Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:25:04.580365 containerd[1485]: time="2025-05-16T02:25:04.580310502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:25:04.580662 containerd[1485]: time="2025-05-16T02:25:04.580439634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 02:25:04.582582 kubelet[1896]: E0516 02:25:04.580902 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 02:25:04.582582 kubelet[1896]: E0516 02:25:04.580976 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 02:25:04.582582 kubelet[1896]: E0516 02:25:04.581205 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9b6e19f8cabf47cabf4f655bab5b819e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6bdfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b68d469c-fjqlb_calico-system(0c4f09d7-c50b-4399-b41e-885e1dd46474): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:25:04.583888 containerd[1485]: time="2025-05-16T02:25:04.583859371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 02:25:04.959836 containerd[1485]: time="2025-05-16T02:25:04.959777632Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:25:04.961508 containerd[1485]: time="2025-05-16T02:25:04.961465308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:25:04.961508 containerd[1485]: time="2025-05-16T02:25:04.961547142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 02:25:04.962149 kubelet[1896]: E0516 02:25:04.961984 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 02:25:04.962149 kubelet[1896]: E0516 02:25:04.962069 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 02:25:04.964939 kubelet[1896]: E0516 02:25:04.964873 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bdfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b68d469c-fjqlb_calico-system(0c4f09d7-c50b-4399-b41e-885e1dd46474): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:25:04.966265 kubelet[1896]: E0516 02:25:04.966169 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-67b68d469c-fjqlb" podUID="0c4f09d7-c50b-4399-b41e-885e1dd46474" May 16 02:25:05.030508 kubelet[1896]: E0516 02:25:05.030416 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:05.409546 kubelet[1896]: E0516 02:25:05.409353 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-67b68d469c-fjqlb" podUID="0c4f09d7-c50b-4399-b41e-885e1dd46474" May 16 02:25:05.437113 kubelet[1896]: I0516 02:25:05.436883 1896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7466bd8df7-ww4bk" podStartSLOduration=3.6878508500000002 podStartE2EDuration="14.436842106s" podCreationTimestamp="2025-05-16 02:24:51 +0000 UTC" firstStartedPulling="2025-05-16 02:24:53.280485946 +0000 UTC m=+330.280929590" lastFinishedPulling="2025-05-16 02:25:04.029477202 +0000 UTC m=+341.029920846" observedRunningTime="2025-05-16 02:25:04.500879322 +0000 UTC m=+341.501322996" watchObservedRunningTime="2025-05-16 02:25:05.436842106 +0000 UTC m=+342.437285800" May 16 02:25:06.031667 kubelet[1896]: E0516 02:25:06.031590 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:06.406577 kubelet[1896]: I0516 02:25:06.406428 1896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 02:25:07.033055 kubelet[1896]: E0516 02:25:07.032937 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:08.034356 
kubelet[1896]: E0516 02:25:08.034216 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:09.035240 kubelet[1896]: E0516 02:25:09.035116 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:10.035978 kubelet[1896]: E0516 02:25:10.035875 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:11.039016 kubelet[1896]: E0516 02:25:11.037661 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:12.039444 kubelet[1896]: E0516 02:25:12.039247 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:13.040148 kubelet[1896]: E0516 02:25:13.039979 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:14.040440 kubelet[1896]: E0516 02:25:14.040268 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:15.041329 kubelet[1896]: E0516 02:25:15.041207 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:15.852890 containerd[1485]: time="2025-05-16T02:25:15.851271730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 02:25:16.042325 kubelet[1896]: E0516 02:25:16.042223 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:16.213425 containerd[1485]: time="2025-05-16T02:25:16.212974948Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:25:16.217455 containerd[1485]: time="2025-05-16T02:25:16.217166153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 02:25:16.217714 containerd[1485]: time="2025-05-16T02:25:16.217330591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:25:16.220081 kubelet[1896]: E0516 02:25:16.218505 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 02:25:16.220081 kubelet[1896]: E0516 02:25:16.218665 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 02:25:16.220081 kubelet[1896]: E0516 02:25:16.219279 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cztmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-8ccw8_calico-system(f2ca8e90-56b3-45b2-ba49-751853591242): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 
Forbidden" logger="UnhandledError" May 16 02:25:16.221248 kubelet[1896]: E0516 02:25:16.221016 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8ccw8" podUID="f2ca8e90-56b3-45b2-ba49-751853591242" May 16 02:25:17.043307 kubelet[1896]: E0516 02:25:17.043153 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:18.043927 kubelet[1896]: E0516 02:25:18.043819 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:19.045372 kubelet[1896]: E0516 02:25:19.045137 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:19.850864 containerd[1485]: time="2025-05-16T02:25:19.850511480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 02:25:20.045723 kubelet[1896]: E0516 02:25:20.045649 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:20.192821 containerd[1485]: time="2025-05-16T02:25:20.192138527Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:25:20.195130 containerd[1485]: time="2025-05-16T02:25:20.194691477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:25:20.195130 containerd[1485]: time="2025-05-16T02:25:20.194719569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 02:25:20.196542 kubelet[1896]: E0516 02:25:20.195731 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 02:25:20.196542 kubelet[1896]: E0516 02:25:20.195925 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 02:25:20.196542 kubelet[1896]: E0516 02:25:20.196310 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9b6e19f8cabf47cabf4f655bab5b819e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6bdfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b68d469c-fjqlb_calico-system(0c4f09d7-c50b-4399-b41e-885e1dd46474): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:25:20.201091 containerd[1485]: time="2025-05-16T02:25:20.200986949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 02:25:20.534234 containerd[1485]: time="2025-05-16T02:25:20.533647272Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:25:20.536254 containerd[1485]: time="2025-05-16T02:25:20.536070408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:25:20.537013 containerd[1485]: time="2025-05-16T02:25:20.536151831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 02:25:20.537880 kubelet[1896]: E0516 02:25:20.537600 1896 
log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 02:25:20.538289 kubelet[1896]: E0516 02:25:20.537832 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 02:25:20.539198 kubelet[1896]: E0516 02:25:20.538618 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bdfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b68d469c-fjqlb_calico-system(0c4f09d7-c50b-4399-b41e-885e1dd46474): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:25:20.540221 kubelet[1896]: E0516 02:25:20.540009 
1896 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-67b68d469c-fjqlb" podUID="0c4f09d7-c50b-4399-b41e-885e1dd46474" May 16 02:25:21.048043 kubelet[1896]: E0516 02:25:21.047903 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:21.230243 containerd[1485]: time="2025-05-16T02:25:21.229744494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"458cd03d349c2816a443671774001fd73ae556e991df5a45df077dc8675de2dd\" pid:4584 exited_at:{seconds:1747362321 nanos:225866548}" May 16 02:25:22.048631 kubelet[1896]: E0516 02:25:22.048510 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:23.049337 kubelet[1896]: E0516 02:25:23.049232 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:23.711302 kubelet[1896]: E0516 02:25:23.711182 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:24.050077 kubelet[1896]: E0516 02:25:24.049807 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:25.050824 kubelet[1896]: E0516 02:25:25.050717 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:26.051592 kubelet[1896]: E0516 02:25:26.051468 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:27.051961 kubelet[1896]: E0516 02:25:27.051831 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:28.053326 kubelet[1896]: E0516 02:25:28.053108 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:28.845485 kubelet[1896]: E0516 02:25:28.845340 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8ccw8" podUID="f2ca8e90-56b3-45b2-ba49-751853591242" May 16 02:25:29.053958 kubelet[1896]: E0516 02:25:29.053837 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:30.054512 kubelet[1896]: E0516 02:25:30.054384 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:30.485589 containerd[1485]: time="2025-05-16T02:25:30.485392705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29\" id:\"c0720720234b33b204f9299ca1613b9b6358d0b042face48e2b447f0c6c4f131\" pid:4615 exited_at:{seconds:1747362330 nanos:484390314}" May 16 02:25:31.055421 kubelet[1896]: E0516 02:25:31.055286 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:32.056411 kubelet[1896]: E0516 02:25:32.056224 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:33.057019 kubelet[1896]: E0516 02:25:33.056757 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:33.853415 kubelet[1896]: E0516 02:25:33.852905 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-67b68d469c-fjqlb" podUID="0c4f09d7-c50b-4399-b41e-885e1dd46474" May 16 02:25:34.057954 kubelet[1896]: E0516 02:25:34.057880 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:35.058638 kubelet[1896]: E0516 02:25:35.058493 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:36.059151 kubelet[1896]: E0516 02:25:36.058991 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:37.061575 kubelet[1896]: E0516 02:25:37.061407 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:38.062080 kubelet[1896]: E0516 02:25:38.061948 1896 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:39.062598 kubelet[1896]: E0516 02:25:39.062477 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:40.063757 kubelet[1896]: E0516 02:25:40.063665 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:40.848166 containerd[1485]: time="2025-05-16T02:25:40.848049410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 02:25:41.064710 kubelet[1896]: E0516 02:25:41.064586 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:41.212338 containerd[1485]: time="2025-05-16T02:25:41.212201179Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:25:41.216826 containerd[1485]: time="2025-05-16T02:25:41.215093997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:25:41.216826 containerd[1485]: time="2025-05-16T02:25:41.215397085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 02:25:41.217884 kubelet[1896]: E0516 02:25:41.217739 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 02:25:41.218482 kubelet[1896]: E0516 02:25:41.218350 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 02:25:41.220103 kubelet[1896]: E0516 02:25:41.219832 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cztmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-8ccw8_calico-system(f2ca8e90-56b3-45b2-ba49-751853591242): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:25:41.222641 kubelet[1896]: E0516 02:25:41.222119 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8ccw8" podUID="f2ca8e90-56b3-45b2-ba49-751853591242" May 16 02:25:42.065654 kubelet[1896]: E0516 02:25:42.065543 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:43.066074 kubelet[1896]: E0516 02:25:43.065914 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:43.711529 kubelet[1896]: E0516 02:25:43.711420 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:44.066574 kubelet[1896]: E0516 02:25:44.066244 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:45.067137 kubelet[1896]: E0516 02:25:45.066985 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:46.067382 kubelet[1896]: E0516 02:25:46.067243 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:47.068148 kubelet[1896]: E0516 02:25:47.068028 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:47.849008 containerd[1485]: time="2025-05-16T02:25:47.848145069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 02:25:48.069404 kubelet[1896]: E0516 02:25:48.069274 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:48.217060 containerd[1485]: time="2025-05-16T02:25:48.216894249Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:25:48.219923 containerd[1485]: time="2025-05-16T02:25:48.219591200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:25:48.219923 containerd[1485]: time="2025-05-16T02:25:48.219725101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 02:25:48.220280 kubelet[1896]: E0516 02:25:48.220151 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 02:25:48.220556 kubelet[1896]: E0516 02:25:48.220286 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 02:25:48.220885 kubelet[1896]: E0516 02:25:48.220696 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9b6e19f8cabf47cabf4f655bab5b819e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6bdfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b68d469c-fjqlb_calico-system(0c4f09d7-c50b-4399-b41e-885e1dd46474): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:25:48.224519 containerd[1485]: time="2025-05-16T02:25:48.223994151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 02:25:48.613447 containerd[1485]: time="2025-05-16T02:25:48.613198753Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:25:48.616514 containerd[1485]: time="2025-05-16T02:25:48.616254516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:25:48.616851 containerd[1485]: 
time="2025-05-16T02:25:48.616362158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 02:25:48.616975 kubelet[1896]: E0516 02:25:48.616733 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 02:25:48.616975 kubelet[1896]: E0516 02:25:48.616895 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 02:25:48.617287 kubelet[1896]: E0516 02:25:48.617169 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bdfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b68d469c-fjqlb_calico-system(0c4f09d7-c50b-4399-b41e-885e1dd46474): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:25:48.619233 kubelet[1896]: E0516 02:25:48.619108 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-67b68d469c-fjqlb" podUID="0c4f09d7-c50b-4399-b41e-885e1dd46474" May 16 02:25:49.070560 kubelet[1896]: E0516 02:25:49.070381 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:50.070841 kubelet[1896]: E0516 02:25:50.070667 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:51.072169 kubelet[1896]: E0516 02:25:51.072060 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:51.202275 containerd[1485]: time="2025-05-16T02:25:51.202211376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"9e2d5382ff68f1c599a8813f842d05cb8b15fe266e57301085781c7e2b1da708\" pid:4638 exited_at:{seconds:1747362351 nanos:201507254}" May 16 02:25:52.074803 kubelet[1896]: E0516 02:25:52.073693 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:52.158755 containerd[1485]: time="2025-05-16T02:25:52.158676565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29\" id:\"0209012f950f1bc3a1c8134c6dc89b8b171c5a4f4c9b67f68fcd8992994b11a1\" pid:4660 exited_at:{seconds:1747362352 nanos:157468488}" May 16 02:25:53.075501 kubelet[1896]: E0516 02:25:53.075265 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:53.849508 kubelet[1896]: E0516 02:25:53.848142 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8ccw8" 
podUID="f2ca8e90-56b3-45b2-ba49-751853591242" May 16 02:25:54.076209 kubelet[1896]: E0516 02:25:54.076088 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:55.077488 kubelet[1896]: E0516 02:25:55.077379 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:56.078120 kubelet[1896]: E0516 02:25:56.077947 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:57.079335 kubelet[1896]: E0516 02:25:57.079229 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:58.079756 kubelet[1896]: E0516 02:25:58.079654 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:25:59.080960 kubelet[1896]: E0516 02:25:59.080854 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:00.081618 kubelet[1896]: E0516 02:26:00.081491 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:00.493687 containerd[1485]: time="2025-05-16T02:26:00.493156400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29\" id:\"d1a8045c36d54b8778160722cf74cf4c4736b768671e8b63f6abd4b6269fbc6a\" pid:4686 exited_at:{seconds:1747362360 nanos:492185278}" May 16 02:26:01.083309 kubelet[1896]: E0516 02:26:01.082944 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:02.083812 kubelet[1896]: E0516 02:26:02.083591 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:02.848043 kubelet[1896]: E0516 02:26:02.847870 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-67b68d469c-fjqlb" podUID="0c4f09d7-c50b-4399-b41e-885e1dd46474" May 16 02:26:03.084361 kubelet[1896]: E0516 02:26:03.084202 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:03.711375 kubelet[1896]: E0516 02:26:03.711270 1896 file.go:104] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:04.085541 kubelet[1896]: E0516 02:26:04.085125 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:04.848701 kubelet[1896]: E0516 02:26:04.848476 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8ccw8" podUID="f2ca8e90-56b3-45b2-ba49-751853591242" May 16 02:26:05.085914 kubelet[1896]: E0516 02:26:05.085750 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:06.087100 kubelet[1896]: E0516 02:26:06.086963 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:07.088173 kubelet[1896]: E0516 02:26:07.088055 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:08.089051 kubelet[1896]: E0516 02:26:08.088951 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:09.089405 kubelet[1896]: E0516 02:26:09.089285 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:10.090380 kubelet[1896]: E0516 02:26:10.090228 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:11.091574 kubelet[1896]: E0516 02:26:11.091459 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:12.092194 kubelet[1896]: E0516 02:26:12.092047 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:13.093204 kubelet[1896]: E0516 02:26:13.093095 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:14.094075 kubelet[1896]: E0516 02:26:14.093938 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:15.094750 kubelet[1896]: E0516 02:26:15.094636 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:16.096104 kubelet[1896]: E0516 02:26:16.095823 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:17.096556 kubelet[1896]: E0516 02:26:17.096438 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:17.850903 kubelet[1896]: E0516 02:26:17.850554 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-67b68d469c-fjqlb" podUID="0c4f09d7-c50b-4399-b41e-885e1dd46474" May 16 02:26:18.096905 kubelet[1896]: E0516 02:26:18.096710 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:19.098155 kubelet[1896]: E0516 02:26:19.097750 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:19.846208 kubelet[1896]: E0516 02:26:19.845568 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8ccw8" podUID="f2ca8e90-56b3-45b2-ba49-751853591242" May 16 02:26:20.099625 kubelet[1896]: E0516 02:26:20.099179 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:21.099978 kubelet[1896]: E0516 02:26:21.099894 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:21.266340 containerd[1485]: time="2025-05-16T02:26:21.266181761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5859f6892a7aac768848eb34b5ef9638373110973a43ef352ad25484e89a535b\" id:\"e446a5db3bf9cbbc2824837c5ea1a69df094e883d948f616cfb3f4618fcc6c02\" pid:4731 exited_at:{seconds:1747362381 nanos:264760525}" May 16 02:26:22.101054 kubelet[1896]: E0516 02:26:22.100914 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:23.106022 kubelet[1896]: E0516 02:26:23.104482 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:23.712148 kubelet[1896]: E0516 02:26:23.711894 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:24.106437 kubelet[1896]: E0516 02:26:24.105638 1896 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:25.107503 kubelet[1896]: E0516 02:26:25.107376 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:26.108872 kubelet[1896]: E0516 02:26:26.108679 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:27.110250 kubelet[1896]: E0516 02:26:27.110143 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:28.111092 kubelet[1896]: E0516 02:26:28.110941 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:29.112016 kubelet[1896]: E0516 02:26:29.111915 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:30.112699 kubelet[1896]: E0516 02:26:30.112577 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:30.506355 containerd[1485]: time="2025-05-16T02:26:30.505566425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c5583d37f3f769ff0babbaad35cb7c7c43870f33f89ddd2538b2b85f14fb29\" id:\"93007e60dd76e0ca2d68b26edf3c181ac1077f6d48a63903e248cc1fb23ed209\" pid:4760 exited_at:{seconds:1747362390 nanos:503692929}" May 16 02:26:31.113185 kubelet[1896]: E0516 02:26:31.113051 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:32.114057 kubelet[1896]: E0516 02:26:32.113938 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:32.860859 containerd[1485]: time="2025-05-16T02:26:32.858931619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 02:26:33.115719 kubelet[1896]: E0516 02:26:33.115098 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:33.233806 containerd[1485]: time="2025-05-16T02:26:33.233678159Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:26:33.236351 containerd[1485]: time="2025-05-16T02:26:33.236135259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:26:33.237947 containerd[1485]: time="2025-05-16T02:26:33.236200862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 02:26:33.238070 kubelet[1896]: E0516 02:26:33.236946 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 02:26:33.238070 kubelet[1896]: E0516 02:26:33.237154 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 02:26:33.238070 kubelet[1896]: E0516 02:26:33.237719 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9b6e19f8cabf47cabf4f655bab5b819e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6bdfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b68d469c-fjqlb_calico-system(0c4f09d7-c50b-4399-b41e-885e1dd46474): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:26:33.241211 containerd[1485]: time="2025-05-16T02:26:33.241014084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 02:26:33.583982 containerd[1485]: time="2025-05-16T02:26:33.583742300Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:26:33.586831 containerd[1485]: time="2025-05-16T02:26:33.586153203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 02:26:33.586831 containerd[1485]: 
time="2025-05-16T02:26:33.586154496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:26:33.587750 kubelet[1896]: E0516 02:26:33.587622 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 02:26:33.589348 kubelet[1896]: E0516 02:26:33.587824 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 02:26:33.589348 kubelet[1896]: E0516 02:26:33.588313 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bdfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b68d469c-fjqlb_calico-system(0c4f09d7-c50b-4399-b41e-885e1dd46474): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:26:33.590423 kubelet[1896]: E0516 02:26:33.590308 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-67b68d469c-fjqlb" podUID="0c4f09d7-c50b-4399-b41e-885e1dd46474" May 16 02:26:33.851845 containerd[1485]: time="2025-05-16T02:26:33.851078835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 
16 02:26:34.116607 kubelet[1896]: E0516 02:26:34.116294 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:34.207419 containerd[1485]: time="2025-05-16T02:26:34.207329332Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 02:26:34.209567 containerd[1485]: time="2025-05-16T02:26:34.209468004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 02:26:34.210302 containerd[1485]: time="2025-05-16T02:26:34.209667178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 02:26:34.210675 kubelet[1896]: E0516 02:26:34.210092 1896 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 02:26:34.210675 kubelet[1896]: E0516 02:26:34.210210 1896 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 02:26:34.211494 kubelet[1896]: E0516 02:26:34.210545 1896 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cztmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-8ccw8_calico-system(f2ca8e90-56b3-45b2-ba49-751853591242): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 02:26:34.212838 kubelet[1896]: E0516 02:26:34.212724 1896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8ccw8" podUID="f2ca8e90-56b3-45b2-ba49-751853591242" May 16 02:26:35.118699 kubelet[1896]: E0516 02:26:35.118591 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:36.119647 kubelet[1896]: E0516 02:26:36.119406 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:37.119883 kubelet[1896]: E0516 02:26:37.119753 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:38.121053 kubelet[1896]: E0516 02:26:38.120929 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:39.121714 kubelet[1896]: E0516 02:26:39.121605 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:40.122729 kubelet[1896]: E0516 02:26:40.122636 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:41.123152 kubelet[1896]: E0516 02:26:41.123040 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:42.124338 kubelet[1896]: E0516 02:26:42.124179 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 02:26:43.125344 kubelet[1896]: E0516 02:26:43.125245 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"