May 17 03:46:15.079400 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025 May 17 03:46:15.079427 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 03:46:15.079437 kernel: BIOS-provided physical RAM map: May 17 03:46:15.079445 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 17 03:46:15.079452 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 17 03:46:15.079462 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 17 03:46:15.079470 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 17 03:46:15.079478 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 17 03:46:15.079485 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 17 03:46:15.079492 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 17 03:46:15.079500 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 17 03:46:15.079507 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 17 03:46:15.079514 kernel: NX (Execute Disable) protection: active May 17 03:46:15.079524 kernel: APIC: Static calls initialized May 17 03:46:15.079533 kernel: SMBIOS 3.0.0 present. 
May 17 03:46:15.079541 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 17 03:46:15.079549 kernel: Hypervisor detected: KVM May 17 03:46:15.079556 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 03:46:15.079564 kernel: kvm-clock: using sched offset of 3424942764 cycles May 17 03:46:15.079574 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 03:46:15.079582 kernel: tsc: Detected 1996.249 MHz processor May 17 03:46:15.079590 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 03:46:15.079598 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 03:46:15.079606 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 17 03:46:15.079614 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 17 03:46:15.079622 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 03:46:15.079630 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 17 03:46:15.079638 kernel: ACPI: Early table checksum verification disabled May 17 03:46:15.079647 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 17 03:46:15.079655 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 03:46:15.079663 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 03:46:15.079671 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 03:46:15.079679 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 17 03:46:15.079687 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 03:46:15.079694 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 03:46:15.079702 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 17 03:46:15.079713 kernel: ACPI: Reserving DSDT table memory at [mem 
0xbffe0040-0xbffe1a48] May 17 03:46:15.079721 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 17 03:46:15.079728 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 17 03:46:15.079737 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 17 03:46:15.079748 kernel: No NUMA configuration found May 17 03:46:15.079756 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 17 03:46:15.079764 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 17 03:46:15.079775 kernel: Zone ranges: May 17 03:46:15.079783 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 03:46:15.079791 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 17 03:46:15.079799 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 17 03:46:15.079807 kernel: Movable zone start for each node May 17 03:46:15.079816 kernel: Early memory node ranges May 17 03:46:15.079824 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 17 03:46:15.079832 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 17 03:46:15.079842 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 17 03:46:15.079850 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 17 03:46:15.079858 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 03:46:15.079866 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 03:46:15.079874 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 17 03:46:15.079883 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 03:46:15.079891 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 03:46:15.079900 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 03:46:15.079908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 03:46:15.079920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 03:46:15.079929 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 03:46:15.079937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 03:46:15.079945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 03:46:15.079953 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 03:46:15.079961 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 03:46:15.079970 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 03:46:15.079978 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 17 03:46:15.079986 kernel: Booting paravirtualized kernel on KVM May 17 03:46:15.079998 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 03:46:15.080007 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 03:46:15.080015 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 03:46:15.080023 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 03:46:15.080031 kernel: pcpu-alloc: [0] 0 1 May 17 03:46:15.080039 kernel: kvm-guest: PV spinlocks disabled, no host support May 17 03:46:15.080049 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 03:46:15.080058 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 17 03:46:15.080068 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 03:46:15.080077 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 03:46:15.080085 kernel: Fallback order for Node 0: 0 May 17 03:46:15.080093 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 17 03:46:15.080101 kernel: Policy zone: Normal May 17 03:46:15.080110 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 03:46:15.080118 kernel: software IO TLB: area num 2. May 17 03:46:15.080126 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 227308K reserved, 0K cma-reserved) May 17 03:46:15.080135 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 03:46:15.080145 kernel: ftrace: allocating 37948 entries in 149 pages May 17 03:46:15.080153 kernel: ftrace: allocated 149 pages with 4 groups May 17 03:46:15.080162 kernel: Dynamic Preempt: voluntary May 17 03:46:15.080170 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 03:46:15.080179 kernel: rcu: RCU event tracing is enabled. May 17 03:46:15.080187 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 03:46:15.080212 kernel: Trampoline variant of Tasks RCU enabled. May 17 03:46:15.080221 kernel: Rude variant of Tasks RCU enabled. May 17 03:46:15.080229 kernel: Tracing variant of Tasks RCU enabled. May 17 03:46:15.080241 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 03:46:15.080250 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 03:46:15.080258 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 03:46:15.080266 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 17 03:46:15.080274 kernel: Console: colour VGA+ 80x25 May 17 03:46:15.080282 kernel: printk: console [tty0] enabled May 17 03:46:15.080290 kernel: printk: console [ttyS0] enabled May 17 03:46:15.080299 kernel: ACPI: Core revision 20230628 May 17 03:46:15.080307 kernel: APIC: Switch to symmetric I/O mode setup May 17 03:46:15.080317 kernel: x2apic enabled May 17 03:46:15.080325 kernel: APIC: Switched APIC routing to: physical x2apic May 17 03:46:15.080333 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 03:46:15.080342 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 17 03:46:15.080350 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 17 03:46:15.080358 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 17 03:46:15.080367 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 17 03:46:15.080375 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 03:46:15.080383 kernel: Spectre V2 : Mitigation: Retpolines May 17 03:46:15.080393 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 03:46:15.080402 kernel: Speculative Store Bypass: Vulnerable May 17 03:46:15.080410 kernel: x86/fpu: x87 FPU will use FXSAVE May 17 03:46:15.080418 kernel: Freeing SMP alternatives memory: 32K May 17 03:46:15.080426 kernel: pid_max: default: 32768 minimum: 301 May 17 03:46:15.080441 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 03:46:15.080452 kernel: landlock: Up and running. May 17 03:46:15.080461 kernel: SELinux: Initializing. 
May 17 03:46:15.080470 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 03:46:15.080478 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 03:46:15.080487 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 17 03:46:15.080496 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 03:46:15.080508 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 03:46:15.080517 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 03:46:15.080526 kernel: Performance Events: AMD PMU driver. May 17 03:46:15.080534 kernel: ... version: 0 May 17 03:46:15.080543 kernel: ... bit width: 48 May 17 03:46:15.080554 kernel: ... generic registers: 4 May 17 03:46:15.080563 kernel: ... value mask: 0000ffffffffffff May 17 03:46:15.080571 kernel: ... max period: 00007fffffffffff May 17 03:46:15.080580 kernel: ... fixed-purpose events: 0 May 17 03:46:15.080588 kernel: ... event mask: 000000000000000f May 17 03:46:15.080597 kernel: signal: max sigframe size: 1440 May 17 03:46:15.080606 kernel: rcu: Hierarchical SRCU implementation. May 17 03:46:15.080614 kernel: rcu: Max phase no-delay instances is 400. May 17 03:46:15.080623 kernel: smp: Bringing up secondary CPUs ... May 17 03:46:15.080634 kernel: smpboot: x86: Booting SMP configuration: May 17 03:46:15.080643 kernel: .... 
node #0, CPUs: #1 May 17 03:46:15.080651 kernel: smp: Brought up 1 node, 2 CPUs May 17 03:46:15.080660 kernel: smpboot: Max logical packages: 2 May 17 03:46:15.080669 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 17 03:46:15.080678 kernel: devtmpfs: initialized May 17 03:46:15.080686 kernel: x86/mm: Memory block size: 128MB May 17 03:46:15.080695 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 03:46:15.080704 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 03:46:15.080715 kernel: pinctrl core: initialized pinctrl subsystem May 17 03:46:15.080724 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 03:46:15.080732 kernel: audit: initializing netlink subsys (disabled) May 17 03:46:15.080741 kernel: audit: type=2000 audit(1747453573.961:1): state=initialized audit_enabled=0 res=1 May 17 03:46:15.080749 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 03:46:15.080758 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 03:46:15.080767 kernel: cpuidle: using governor menu May 17 03:46:15.080775 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 03:46:15.080784 kernel: dca service started, version 1.12.1 May 17 03:46:15.080796 kernel: PCI: Using configuration type 1 for base access May 17 03:46:15.080805 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 03:46:15.080814 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 03:46:15.080823 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 03:46:15.080831 kernel: ACPI: Added _OSI(Module Device) May 17 03:46:15.080840 kernel: ACPI: Added _OSI(Processor Device) May 17 03:46:15.080849 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 03:46:15.080857 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 03:46:15.080866 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 03:46:15.080878 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 03:46:15.080887 kernel: ACPI: Interpreter enabled May 17 03:46:15.080896 kernel: ACPI: PM: (supports S0 S3 S5) May 17 03:46:15.080904 kernel: ACPI: Using IOAPIC for interrupt routing May 17 03:46:15.080913 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 03:46:15.080922 kernel: PCI: Using E820 reservations for host bridge windows May 17 03:46:15.080930 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 17 03:46:15.080939 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 03:46:15.081083 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 17 03:46:15.081186 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 17 03:46:15.083512 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 17 03:46:15.083528 kernel: acpiphp: Slot [3] registered May 17 03:46:15.083538 kernel: acpiphp: Slot [4] registered May 17 03:46:15.083547 kernel: acpiphp: Slot [5] registered May 17 03:46:15.083556 kernel: acpiphp: Slot [6] registered May 17 03:46:15.083564 kernel: acpiphp: Slot [7] registered May 17 03:46:15.083573 kernel: acpiphp: Slot [8] registered May 17 03:46:15.083587 kernel: acpiphp: Slot [9] registered May 17 03:46:15.083595 kernel: 
acpiphp: Slot [10] registered May 17 03:46:15.083604 kernel: acpiphp: Slot [11] registered May 17 03:46:15.083613 kernel: acpiphp: Slot [12] registered May 17 03:46:15.083621 kernel: acpiphp: Slot [13] registered May 17 03:46:15.083630 kernel: acpiphp: Slot [14] registered May 17 03:46:15.083638 kernel: acpiphp: Slot [15] registered May 17 03:46:15.083647 kernel: acpiphp: Slot [16] registered May 17 03:46:15.083656 kernel: acpiphp: Slot [17] registered May 17 03:46:15.083667 kernel: acpiphp: Slot [18] registered May 17 03:46:15.083675 kernel: acpiphp: Slot [19] registered May 17 03:46:15.083684 kernel: acpiphp: Slot [20] registered May 17 03:46:15.083692 kernel: acpiphp: Slot [21] registered May 17 03:46:15.083701 kernel: acpiphp: Slot [22] registered May 17 03:46:15.083709 kernel: acpiphp: Slot [23] registered May 17 03:46:15.083718 kernel: acpiphp: Slot [24] registered May 17 03:46:15.083726 kernel: acpiphp: Slot [25] registered May 17 03:46:15.083735 kernel: acpiphp: Slot [26] registered May 17 03:46:15.083743 kernel: acpiphp: Slot [27] registered May 17 03:46:15.083754 kernel: acpiphp: Slot [28] registered May 17 03:46:15.083763 kernel: acpiphp: Slot [29] registered May 17 03:46:15.083771 kernel: acpiphp: Slot [30] registered May 17 03:46:15.083780 kernel: acpiphp: Slot [31] registered May 17 03:46:15.083788 kernel: PCI host bridge to bus 0000:00 May 17 03:46:15.083886 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 03:46:15.083974 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 03:46:15.084057 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 03:46:15.084145 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 03:46:15.085325 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 17 03:46:15.085410 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 03:46:15.085516 kernel: pci 0000:00:00.0: 
[8086:1237] type 00 class 0x060000 May 17 03:46:15.085615 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 17 03:46:15.085713 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 17 03:46:15.085809 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 17 03:46:15.085899 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 17 03:46:15.085989 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 17 03:46:15.086080 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 17 03:46:15.086171 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 17 03:46:15.087304 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 17 03:46:15.087399 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 17 03:46:15.087495 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 17 03:46:15.087593 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 17 03:46:15.087685 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 17 03:46:15.087777 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 17 03:46:15.087868 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 17 03:46:15.087958 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 17 03:46:15.088053 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 03:46:15.088153 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 17 03:46:15.089277 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 17 03:46:15.089382 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 17 03:46:15.089476 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 17 03:46:15.089569 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 17 03:46:15.089671 kernel: pci 0000:00:04.0: [1af4:1001] 
type 00 class 0x010000 May 17 03:46:15.089768 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 17 03:46:15.089860 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 17 03:46:15.089955 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 17 03:46:15.090057 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 17 03:46:15.090154 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 17 03:46:15.091305 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 17 03:46:15.091417 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 17 03:46:15.091521 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 17 03:46:15.091618 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 17 03:46:15.091713 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 17 03:46:15.091728 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 03:46:15.091738 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 03:46:15.091748 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 03:46:15.091757 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 03:46:15.091767 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 17 03:46:15.091780 kernel: iommu: Default domain type: Translated May 17 03:46:15.091789 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 03:46:15.091799 kernel: PCI: Using ACPI for IRQ routing May 17 03:46:15.091808 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 03:46:15.091817 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 17 03:46:15.091827 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 17 03:46:15.091922 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 17 03:46:15.092017 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 17 03:46:15.092111 kernel: pci 0000:00:02.0: 
vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 03:46:15.092129 kernel: vgaarb: loaded May 17 03:46:15.092139 kernel: clocksource: Switched to clocksource kvm-clock May 17 03:46:15.092149 kernel: VFS: Disk quotas dquot_6.6.0 May 17 03:46:15.092159 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 03:46:15.092168 kernel: pnp: PnP ACPI init May 17 03:46:15.092281 kernel: pnp 00:03: [dma 2] May 17 03:46:15.092296 kernel: pnp: PnP ACPI: found 5 devices May 17 03:46:15.092305 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 03:46:15.092318 kernel: NET: Registered PF_INET protocol family May 17 03:46:15.092327 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 03:46:15.092336 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 03:46:15.092345 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 03:46:15.092354 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 03:46:15.092363 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 03:46:15.092372 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 03:46:15.092381 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 03:46:15.092390 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 03:46:15.092401 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 03:46:15.092410 kernel: NET: Registered PF_XDP protocol family May 17 03:46:15.092488 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 03:46:15.092568 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 03:46:15.092649 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 03:46:15.092729 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff 
window] May 17 03:46:15.092808 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 17 03:46:15.096174 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 17 03:46:15.096316 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 03:46:15.096331 kernel: PCI: CLS 0 bytes, default 64 May 17 03:46:15.096340 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 03:46:15.096349 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 17 03:46:15.096358 kernel: Initialise system trusted keyrings May 17 03:46:15.096367 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 03:46:15.096376 kernel: Key type asymmetric registered May 17 03:46:15.096385 kernel: Asymmetric key parser 'x509' registered May 17 03:46:15.096393 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 03:46:15.096406 kernel: io scheduler mq-deadline registered May 17 03:46:15.096415 kernel: io scheduler kyber registered May 17 03:46:15.096424 kernel: io scheduler bfq registered May 17 03:46:15.096433 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 03:46:15.096442 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 17 03:46:15.096452 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 17 03:46:15.096460 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 17 03:46:15.096469 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 17 03:46:15.096478 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 03:46:15.096489 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 03:46:15.096497 kernel: random: crng init done May 17 03:46:15.096506 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 03:46:15.096515 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 03:46:15.096524 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 03:46:15.096623 kernel: rtc_cmos 00:04: RTC can 
wake from S4 May 17 03:46:15.096638 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 03:46:15.096716 kernel: rtc_cmos 00:04: registered as rtc0 May 17 03:46:15.096802 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T03:46:14 UTC (1747453574) May 17 03:46:15.096881 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 17 03:46:15.096895 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 17 03:46:15.096904 kernel: NET: Registered PF_INET6 protocol family May 17 03:46:15.096913 kernel: Segment Routing with IPv6 May 17 03:46:15.096922 kernel: In-situ OAM (IOAM) with IPv6 May 17 03:46:15.096930 kernel: NET: Registered PF_PACKET protocol family May 17 03:46:15.096939 kernel: Key type dns_resolver registered May 17 03:46:15.096948 kernel: IPI shorthand broadcast: enabled May 17 03:46:15.096960 kernel: sched_clock: Marking stable (1000008071, 188035764)->(1217919828, -29875993) May 17 03:46:15.096969 kernel: registered taskstats version 1 May 17 03:46:15.096978 kernel: Loading compiled-in X.509 certificates May 17 03:46:15.096987 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 03:46:15.096996 kernel: Key type .fscrypt registered May 17 03:46:15.097004 kernel: Key type fscrypt-provisioning registered May 17 03:46:15.097013 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 03:46:15.097022 kernel: ima: Allocated hash algorithm: sha1 May 17 03:46:15.097032 kernel: ima: No architecture policies found May 17 03:46:15.097041 kernel: clk: Disabling unused clocks May 17 03:46:15.097050 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 03:46:15.097058 kernel: Write protecting the kernel read-only data: 36864k May 17 03:46:15.097067 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 03:46:15.097076 kernel: Run /init as init process May 17 03:46:15.097085 kernel: with arguments: May 17 03:46:15.097093 kernel: /init May 17 03:46:15.097102 kernel: with environment: May 17 03:46:15.097110 kernel: HOME=/ May 17 03:46:15.097120 kernel: TERM=linux May 17 03:46:15.097129 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 03:46:15.097141 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 03:46:15.097152 systemd[1]: Detected virtualization kvm. May 17 03:46:15.097162 systemd[1]: Detected architecture x86-64. May 17 03:46:15.097172 systemd[1]: Running in initrd. May 17 03:46:15.097181 systemd[1]: No hostname configured, using default hostname. May 17 03:46:15.097208 systemd[1]: Hostname set to . May 17 03:46:15.097219 systemd[1]: Initializing machine ID from VM UUID. May 17 03:46:15.097228 systemd[1]: Queued start job for default target initrd.target. May 17 03:46:15.097238 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 03:46:15.097247 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 17 03:46:15.097258 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 03:46:15.097267 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 03:46:15.097288 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 03:46:15.097299 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 03:46:15.097311 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 03:46:15.097321 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 03:46:15.097330 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 03:46:15.097342 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 03:46:15.097352 systemd[1]: Reached target paths.target - Path Units. May 17 03:46:15.097362 systemd[1]: Reached target slices.target - Slice Units. May 17 03:46:15.097372 systemd[1]: Reached target swap.target - Swaps. May 17 03:46:15.097381 systemd[1]: Reached target timers.target - Timer Units. May 17 03:46:15.097391 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 03:46:15.097401 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 03:46:15.097410 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 03:46:15.097420 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 03:46:15.097433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 03:46:15.097443 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 03:46:15.097453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 17 03:46:15.097463 systemd[1]: Reached target sockets.target - Socket Units.
May 17 03:46:15.097472 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 03:46:15.097482 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 03:46:15.097492 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 03:46:15.097502 systemd[1]: Starting systemd-fsck-usr.service...
May 17 03:46:15.097513 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 03:46:15.097523 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 03:46:15.097551 systemd-journald[184]: Collecting audit messages is disabled.
May 17 03:46:15.097576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 03:46:15.097588 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 03:46:15.097598 systemd-journald[184]: Journal started
May 17 03:46:15.097621 systemd-journald[184]: Runtime Journal (/run/log/journal/73255b111ae14ad7b2c4cad12770c42e) is 8.0M, max 78.3M, 70.3M free.
May 17 03:46:15.102249 systemd-modules-load[185]: Inserted module 'overlay'
May 17 03:46:15.111370 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 03:46:15.120224 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 03:46:15.121136 systemd[1]: Finished systemd-fsck-usr.service.
May 17 03:46:15.132399 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 03:46:15.135489 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 03:46:15.143598 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 03:46:15.187292 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 03:46:15.187317 kernel: Bridge firewalling registered
May 17 03:46:15.150325 systemd-modules-load[185]: Inserted module 'br_netfilter'
May 17 03:46:15.194578 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 03:46:15.195364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 03:46:15.203751 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 03:46:15.205318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 03:46:15.212409 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 03:46:15.214352 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 03:46:15.223842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 03:46:15.226014 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 03:46:15.235350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 03:46:15.236064 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 03:46:15.237470 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 03:46:15.260112 dracut-cmdline[220]: dracut-dracut-053
May 17 03:46:15.265061 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 03:46:15.270026 systemd-resolved[218]: Positive Trust Anchors:
May 17 03:46:15.270042 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 03:46:15.270086 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 03:46:15.272889 systemd-resolved[218]: Defaulting to hostname 'linux'.
May 17 03:46:15.273887 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 03:46:15.278051 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 03:46:15.350278 kernel: SCSI subsystem initialized
May 17 03:46:15.361254 kernel: Loading iSCSI transport class v2.0-870.
May 17 03:46:15.373499 kernel: iscsi: registered transport (tcp)
May 17 03:46:15.396319 kernel: iscsi: registered transport (qla4xxx)
May 17 03:46:15.396379 kernel: QLogic iSCSI HBA Driver
May 17 03:46:15.456428 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 03:46:15.466539 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 03:46:15.519493 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 03:46:15.519618 kernel: device-mapper: uevent: version 1.0.3
May 17 03:46:15.522251 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 03:46:15.569443 kernel: raid6: sse2x4 gen() 13161 MB/s
May 17 03:46:15.587247 kernel: raid6: sse2x2 gen() 14630 MB/s
May 17 03:46:15.606340 kernel: raid6: sse2x1 gen() 8674 MB/s
May 17 03:46:15.606380 kernel: raid6: using algorithm sse2x2 gen() 14630 MB/s
May 17 03:46:15.626116 kernel: raid6: .... xor() 9047 MB/s, rmw enabled
May 17 03:46:15.626147 kernel: raid6: using ssse3x2 recovery algorithm
May 17 03:46:15.649791 kernel: xor: measuring software checksum speed
May 17 03:46:15.649851 kernel: prefetch64-sse : 18420 MB/sec
May 17 03:46:15.650355 kernel: generic_sse : 16848 MB/sec
May 17 03:46:15.653112 kernel: xor: using function: prefetch64-sse (18420 MB/sec)
May 17 03:46:15.863254 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 03:46:15.878988 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 03:46:15.885339 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 03:46:15.937048 systemd-udevd[402]: Using default interface naming scheme 'v255'.
May 17 03:46:15.941537 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 03:46:15.949405 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 03:46:15.963920 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
May 17 03:46:15.992368 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 03:46:15.999365 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 03:46:16.043299 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 03:46:16.053502 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 03:46:16.075275 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 03:46:16.077803 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 03:46:16.080107 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 03:46:16.081171 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 03:46:16.088426 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 03:46:16.113724 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 03:46:16.135218 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 17 03:46:16.141546 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 17 03:46:16.162293 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 03:46:16.162359 kernel: GPT:17805311 != 20971519
May 17 03:46:16.162372 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 03:46:16.162384 kernel: GPT:17805311 != 20971519
May 17 03:46:16.162394 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 03:46:16.162414 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 03:46:16.167510 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 03:46:16.167637 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 03:46:16.168298 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 03:46:16.172337 kernel: libata version 3.00 loaded.
May 17 03:46:16.171933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 03:46:16.172063 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 03:46:16.174056 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 03:46:16.180105 kernel: ata_piix 0000:00:01.1: version 2.13
May 17 03:46:16.179495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 03:46:16.183491 kernel: scsi host0: ata_piix
May 17 03:46:16.190105 kernel: scsi host1: ata_piix
May 17 03:46:16.194306 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 17 03:46:16.194334 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 17 03:46:16.229229 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452)
May 17 03:46:16.229281 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (466)
May 17 03:46:16.243305 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 17 03:46:16.264564 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 17 03:46:16.265404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 03:46:16.271005 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 17 03:46:16.271611 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 17 03:46:16.277891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 03:46:16.288381 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 03:46:16.291422 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 03:46:16.306235 disk-uuid[502]: Primary Header is updated.
May 17 03:46:16.306235 disk-uuid[502]: Secondary Entries is updated.
May 17 03:46:16.306235 disk-uuid[502]: Secondary Header is updated.
May 17 03:46:16.313519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 03:46:16.316287 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 03:46:16.322526 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 03:46:17.338566 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 03:46:17.338671 disk-uuid[507]: The operation has completed successfully.
May 17 03:46:17.423083 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 03:46:17.423322 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 03:46:17.456396 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 03:46:17.462112 sh[524]: Success
May 17 03:46:17.487239 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 17 03:46:17.565323 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 03:46:17.568797 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 03:46:17.577378 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 03:46:17.619941 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 03:46:17.620013 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 03:46:17.624874 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 03:46:17.630003 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 03:46:17.633823 kernel: BTRFS info (device dm-0): using free space tree
May 17 03:46:17.655176 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 03:46:17.657513 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 03:46:17.670682 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 03:46:17.675676 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 03:46:17.693445 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 03:46:17.693518 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 03:46:17.697726 kernel: BTRFS info (device vda6): using free space tree
May 17 03:46:17.709256 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 03:46:17.730334 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 03:46:17.735831 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 03:46:17.748667 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 03:46:17.757495 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 03:46:17.804054 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 03:46:17.816537 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 03:46:17.834714 systemd-networkd[707]: lo: Link UP
May 17 03:46:17.834724 systemd-networkd[707]: lo: Gained carrier
May 17 03:46:17.835866 systemd-networkd[707]: Enumeration completed
May 17 03:46:17.836300 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 03:46:17.836303 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 03:46:17.837232 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 03:46:17.837441 systemd-networkd[707]: eth0: Link UP
May 17 03:46:17.837444 systemd-networkd[707]: eth0: Gained carrier
May 17 03:46:17.837452 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 03:46:17.838234 systemd[1]: Reached target network.target - Network.
May 17 03:46:17.851259 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.46/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 17 03:46:17.910467 ignition[634]: Ignition 2.19.0
May 17 03:46:17.911304 ignition[634]: Stage: fetch-offline
May 17 03:46:17.912130 ignition[634]: no configs at "/usr/lib/ignition/base.d"
May 17 03:46:17.912144 ignition[634]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 17 03:46:17.913873 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 03:46:17.912300 ignition[634]: parsed url from cmdline: ""
May 17 03:46:17.912304 ignition[634]: no config URL provided
May 17 03:46:17.912311 ignition[634]: reading system config file "/usr/lib/ignition/user.ign"
May 17 03:46:17.912323 ignition[634]: no config at "/usr/lib/ignition/user.ign"
May 17 03:46:17.912330 ignition[634]: failed to fetch config: resource requires networking
May 17 03:46:17.912588 ignition[634]: Ignition finished successfully
May 17 03:46:17.927128 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 03:46:17.940024 ignition[716]: Ignition 2.19.0
May 17 03:46:17.940041 ignition[716]: Stage: fetch
May 17 03:46:17.941439 ignition[716]: no configs at "/usr/lib/ignition/base.d"
May 17 03:46:17.941456 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 17 03:46:17.941569 ignition[716]: parsed url from cmdline: ""
May 17 03:46:17.941575 ignition[716]: no config URL provided
May 17 03:46:17.941580 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
May 17 03:46:17.941590 ignition[716]: no config at "/usr/lib/ignition/user.ign"
May 17 03:46:17.941722 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 17 03:46:17.943076 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 17 03:46:17.943263 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 17 03:46:18.277956 ignition[716]: GET result: OK
May 17 03:46:18.278163 ignition[716]: parsing config with SHA512: acd507b9de6a737a35c3b97899b7f4f5b19dabffb5852a5cf3388d0312662783e82b7011fc33b977dd5e092e0e38c34ab4a15414265a8505deaff54365113c97
May 17 03:46:18.287647 unknown[716]: fetched base config from "system"
May 17 03:46:18.287683 unknown[716]: fetched base config from "system"
May 17 03:46:18.288568 ignition[716]: fetch: fetch complete
May 17 03:46:18.287697 unknown[716]: fetched user config from "openstack"
May 17 03:46:18.288581 ignition[716]: fetch: fetch passed
May 17 03:46:18.291755 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 03:46:18.288666 ignition[716]: Ignition finished successfully
May 17 03:46:18.302518 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 03:46:18.337461 ignition[722]: Ignition 2.19.0
May 17 03:46:18.337486 ignition[722]: Stage: kargs
May 17 03:46:18.337874 ignition[722]: no configs at "/usr/lib/ignition/base.d"
May 17 03:46:18.337898 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 17 03:46:18.342616 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 03:46:18.340083 ignition[722]: kargs: kargs passed
May 17 03:46:18.340184 ignition[722]: Ignition finished successfully
May 17 03:46:18.356057 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 03:46:18.382741 ignition[728]: Ignition 2.19.0
May 17 03:46:18.382763 ignition[728]: Stage: disks
May 17 03:46:18.383104 ignition[728]: no configs at "/usr/lib/ignition/base.d"
May 17 03:46:18.383126 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 17 03:46:18.386605 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 03:46:18.384972 ignition[728]: disks: disks passed
May 17 03:46:18.388856 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 03:46:18.385045 ignition[728]: Ignition finished successfully
May 17 03:46:18.391032 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 03:46:18.393430 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 03:46:18.395539 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 03:46:18.398182 systemd[1]: Reached target basic.target - Basic System.
May 17 03:46:18.407533 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 03:46:18.437389 systemd-fsck[736]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 17 03:46:18.451357 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 03:46:18.460757 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 03:46:18.616754 kernel: EXT4-fs (vda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 03:46:18.617169 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 03:46:18.618309 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 03:46:18.626437 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 03:46:18.633397 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 03:46:18.650733 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (744)
May 17 03:46:18.650784 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 03:46:18.650812 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 03:46:18.650838 kernel: BTRFS info (device vda6): using free space tree
May 17 03:46:18.651424 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 17 03:46:18.658559 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 17 03:46:18.659567 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 03:46:18.672799 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 03:46:18.659602 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 03:46:18.667192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 03:46:18.674489 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 03:46:18.704399 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 03:46:18.823257 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory
May 17 03:46:18.837031 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory
May 17 03:46:18.854242 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory
May 17 03:46:18.866056 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 03:46:19.021701 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 03:46:19.030377 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 03:46:19.033624 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 03:46:19.052491 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 03:46:19.058928 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 03:46:19.094761 ignition[863]: INFO : Ignition 2.19.0
May 17 03:46:19.094761 ignition[863]: INFO : Stage: mount
May 17 03:46:19.100270 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 03:46:19.100270 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 17 03:46:19.100270 ignition[863]: INFO : mount: mount passed
May 17 03:46:19.100270 ignition[863]: INFO : Ignition finished successfully
May 17 03:46:19.098289 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 03:46:19.107317 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 03:46:19.316833 systemd-networkd[707]: eth0: Gained IPv6LL
May 17 03:46:25.902772 coreos-metadata[752]: May 17 03:46:25.902 WARN failed to locate config-drive, using the metadata service API instead
May 17 03:46:25.943237 coreos-metadata[752]: May 17 03:46:25.943 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 17 03:46:25.956832 coreos-metadata[752]: May 17 03:46:25.956 INFO Fetch successful
May 17 03:46:25.956832 coreos-metadata[752]: May 17 03:46:25.956 INFO wrote hostname ci-4081-3-3-n-2f0bbd4ac2.novalocal to /sysroot/etc/hostname
May 17 03:46:25.960560 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 17 03:46:25.960771 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 17 03:46:25.972499 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 03:46:25.998729 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 03:46:26.017288 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (879)
May 17 03:46:26.026296 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 03:46:26.026401 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 03:46:26.030657 kernel: BTRFS info (device vda6): using free space tree
May 17 03:46:26.043316 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 03:46:26.048813 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 03:46:26.089060 ignition[897]: INFO : Ignition 2.19.0
May 17 03:46:26.090291 ignition[897]: INFO : Stage: files
May 17 03:46:26.090291 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 03:46:26.090291 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 17 03:46:26.093571 ignition[897]: DEBUG : files: compiled without relabeling support, skipping
May 17 03:46:26.095489 ignition[897]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 03:46:26.095489 ignition[897]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 03:46:26.102762 ignition[897]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 03:46:26.103816 ignition[897]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 03:46:26.105194 unknown[897]: wrote ssh authorized keys file for user: core
May 17 03:46:26.105972 ignition[897]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 03:46:26.109232 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 17 03:46:26.110253 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 17 03:46:26.175662 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 03:46:26.505734 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 17 03:46:26.505734 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 03:46:26.511011 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 17 03:46:27.367333 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 03:46:30.264249 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 03:46:30.264249 ignition[897]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 03:46:30.268371 ignition[897]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 03:46:30.268371 ignition[897]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 03:46:30.268371 ignition[897]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 03:46:30.268371 ignition[897]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 17 03:46:30.268371 ignition[897]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 17 03:46:30.268371 ignition[897]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 03:46:30.268371 ignition[897]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 03:46:30.268371 ignition[897]: INFO : files: files passed
May 17 03:46:30.268371 ignition[897]: INFO : Ignition finished successfully
May 17 03:46:30.269360 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 03:46:30.284602 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 03:46:30.296343 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 03:46:30.300885 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 03:46:30.301033 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 03:46:30.328512 initrd-setup-root-after-ignition[925]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 03:46:30.328512 initrd-setup-root-after-ignition[925]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 03:46:30.333959 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 03:46:30.333366 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 03:46:30.336376 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 03:46:30.345481 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 03:46:30.389069 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 03:46:30.389398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 03:46:30.392859 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 03:46:30.401839 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 03:46:30.404446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 03:46:30.412445 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 03:46:30.446513 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 03:46:30.454420 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 03:46:30.494960 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 03:46:30.496715 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 03:46:30.499965 systemd[1]: Stopped target timers.target - Timer Units.
May 17 03:46:30.502850 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 03:46:30.503134 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 03:46:30.506422 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 03:46:30.508481 systemd[1]: Stopped target basic.target - Basic System.
May 17 03:46:30.511291 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 03:46:30.513916 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 03:46:30.516618 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 03:46:30.519648 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 03:46:30.522662 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 03:46:30.525738 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 03:46:30.528681 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 03:46:30.531719 systemd[1]: Stopped target swap.target - Swaps.
May 17 03:46:30.534510 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 03:46:30.534781 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 03:46:30.537969 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 03:46:30.539864 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 03:46:30.542468 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 03:46:30.542715 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 03:46:30.545647 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 03:46:30.545978 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 03:46:30.549782 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 03:46:30.550082 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 03:46:30.552012 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 03:46:30.552431 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 03:46:30.562770 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 03:46:30.581043 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 03:46:30.583919 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 03:46:30.584288 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 03:46:30.587612 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 03:46:30.587905 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 03:46:30.604502 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 03:46:30.606059 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 03:46:30.611678 ignition[949]: INFO : Ignition 2.19.0
May 17 03:46:30.611678 ignition[949]: INFO : Stage: umount
May 17 03:46:30.611678 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 03:46:30.611678 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 17 03:46:30.611678 ignition[949]: INFO : umount: umount passed
May 17 03:46:30.611678 ignition[949]: INFO : Ignition finished successfully
May 17 03:46:30.610325 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 03:46:30.610514 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 03:46:30.615146 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 03:46:30.618480 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 03:46:30.619616 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 03:46:30.619691 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 03:46:30.623115 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 03:46:30.623155 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 17 03:46:30.623690 systemd[1]: Stopped target network.target - Network.
May 17 03:46:30.624133 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 03:46:30.624178 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 03:46:30.624801 systemd[1]: Stopped target paths.target - Path Units.
May 17 03:46:30.625807 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 03:46:30.629406 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 03:46:30.630279 systemd[1]: Stopped target slices.target - Slice Units.
May 17 03:46:30.631440 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 03:46:30.632635 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 03:46:30.632672 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 03:46:30.633648 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 03:46:30.633683 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 03:46:30.634856 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 03:46:30.634898 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 03:46:30.636087 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 03:46:30.636129 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 03:46:30.637246 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 03:46:30.638449 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 03:46:30.640344 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 03:46:30.640818 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 03:46:30.640904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 03:46:30.641237 systemd-networkd[707]: eth0: DHCPv6 lease lost
May 17 03:46:30.642143 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 03:46:30.642250 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 03:46:30.644285 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 03:46:30.644379 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 03:46:30.645602 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 03:46:30.645667 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 03:46:30.658052 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 03:46:30.658994 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 03:46:30.659485 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 03:46:30.662444 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 03:46:30.664452 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 03:46:30.664609 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 03:46:30.670462 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 03:46:30.670595 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 03:46:30.676832 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 03:46:30.676888 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 03:46:30.677664 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 03:46:30.677698 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 03:46:30.678243 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 03:46:30.678295 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 03:46:30.679351 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 03:46:30.679391 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 03:46:30.680548 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 03:46:30.680588 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 03:46:30.684340 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 03:46:30.685017 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 03:46:30.685066 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 03:46:30.686100 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 03:46:30.686142 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 03:46:30.687151 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 03:46:30.687210 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 03:46:30.689116 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 17 03:46:30.689158 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 03:46:30.690396 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 03:46:30.690435 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 03:46:30.691448 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 03:46:30.691487 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 03:46:30.692622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 03:46:30.692661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 03:46:30.696440 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 03:46:30.696537 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 03:46:30.700167 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 03:46:30.700390 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 03:46:30.702005 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 03:46:30.709418 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 03:46:30.715585 systemd[1]: Switching root.
May 17 03:46:30.743101 systemd-journald[184]: Journal stopped
May 17 03:46:32.328188 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
May 17 03:46:32.328289 kernel: SELinux: policy capability network_peer_controls=1
May 17 03:46:32.328310 kernel: SELinux: policy capability open_perms=1
May 17 03:46:32.328326 kernel: SELinux: policy capability extended_socket_class=1
May 17 03:46:32.328337 kernel: SELinux: policy capability always_check_network=0
May 17 03:46:32.328352 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 03:46:32.328364 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 03:46:32.328375 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 03:46:32.328385 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 03:46:32.328397 kernel: audit: type=1403 audit(1747453591.214:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 03:46:32.328409 systemd[1]: Successfully loaded SELinux policy in 80.184ms.
May 17 03:46:32.328426 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.998ms.
May 17 03:46:32.328440 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 03:46:32.328455 systemd[1]: Detected virtualization kvm.
May 17 03:46:32.328467 systemd[1]: Detected architecture x86-64.
May 17 03:46:32.328479 systemd[1]: Detected first boot.
May 17 03:46:32.328494 systemd[1]: Hostname set to .
May 17 03:46:32.328507 systemd[1]: Initializing machine ID from VM UUID.
May 17 03:46:32.328519 zram_generator::config[991]: No configuration found.
May 17 03:46:32.328533 systemd[1]: Populated /etc with preset unit settings.
May 17 03:46:32.328545 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 03:46:32.328559 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 03:46:32.328573 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 03:46:32.328585 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 03:46:32.328598 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 03:46:32.328610 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 03:46:32.328622 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 03:46:32.328634 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 03:46:32.328646 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 03:46:32.328660 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 03:46:32.328672 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 03:46:32.328685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 03:46:32.328697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 03:46:32.328709 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 03:46:32.328721 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 03:46:32.328733 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 03:46:32.328749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 03:46:32.328761 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 03:46:32.328775 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 03:46:32.328788 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 03:46:32.328800 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 03:46:32.328812 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 03:46:32.328824 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 03:46:32.328837 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 03:46:32.328850 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 03:46:32.328864 systemd[1]: Reached target slices.target - Slice Units.
May 17 03:46:32.328876 systemd[1]: Reached target swap.target - Swaps.
May 17 03:46:32.328888 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 03:46:32.328900 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 03:46:32.328913 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 03:46:32.328925 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 03:46:32.328937 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 03:46:32.328950 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 03:46:32.328962 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 03:46:32.328976 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 03:46:32.328988 systemd[1]: Mounting media.mount - External Media Directory...
May 17 03:46:32.329000 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 03:46:32.329012 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 03:46:32.329025 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 03:46:32.329040 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 03:46:32.329053 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 03:46:32.329066 systemd[1]: Reached target machines.target - Containers.
May 17 03:46:32.329079 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 03:46:32.329092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 03:46:32.329106 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 03:46:32.329118 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 03:46:32.329130 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 03:46:32.329142 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 03:46:32.329154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 03:46:32.329166 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 03:46:32.329178 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 03:46:32.335592 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 03:46:32.335619 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 03:46:32.335632 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 03:46:32.335643 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 03:46:32.335655 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 03:46:32.335667 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 03:46:32.335679 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 03:46:32.335691 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 03:46:32.335703 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 03:46:32.335719 kernel: loop: module loaded
May 17 03:46:32.335731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 03:46:32.335744 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 03:46:32.335756 systemd[1]: Stopped verity-setup.service.
May 17 03:46:32.335768 kernel: fuse: init (API version 7.39)
May 17 03:46:32.335780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 03:46:32.335808 systemd-journald[1094]: Collecting audit messages is disabled.
May 17 03:46:32.335835 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 03:46:32.335847 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 03:46:32.335859 systemd[1]: Mounted media.mount - External Media Directory.
May 17 03:46:32.335871 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 03:46:32.335883 systemd-journald[1094]: Journal started
May 17 03:46:32.335911 systemd-journald[1094]: Runtime Journal (/run/log/journal/73255b111ae14ad7b2c4cad12770c42e) is 8.0M, max 78.3M, 70.3M free.
May 17 03:46:31.978378 systemd[1]: Queued start job for default target multi-user.target.
May 17 03:46:32.001276 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 17 03:46:32.001656 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 03:46:32.341887 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 03:46:32.340038 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 03:46:32.340642 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 03:46:32.341353 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 03:46:32.342701 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 03:46:32.343464 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 03:46:32.343577 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 03:46:32.344513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 03:46:32.344656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 03:46:32.345563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 03:46:32.345768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 03:46:32.346602 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 03:46:32.346781 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 03:46:32.347587 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 03:46:32.347769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 03:46:32.348943 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 03:46:32.349745 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 03:46:32.350873 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 03:46:32.354674 kernel: ACPI: bus type drm_connector registered
May 17 03:46:32.355365 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 03:46:32.355665 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 03:46:32.363820 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 03:46:32.370165 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 03:46:32.374285 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 03:46:32.376307 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 03:46:32.376341 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 03:46:32.378580 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 03:46:32.385331 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 03:46:32.391878 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 03:46:32.393667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 03:46:32.400350 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 03:46:32.403086 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 03:46:32.405561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 03:46:32.415002 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 03:46:32.416068 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 03:46:32.426399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 03:46:32.429597 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 03:46:32.431069 systemd-journald[1094]: Time spent on flushing to /var/log/journal/73255b111ae14ad7b2c4cad12770c42e is 57.999ms for 941 entries.
May 17 03:46:32.431069 systemd-journald[1094]: System Journal (/var/log/journal/73255b111ae14ad7b2c4cad12770c42e) is 8.0M, max 584.8M, 576.8M free.
May 17 03:46:32.510118 systemd-journald[1094]: Received client request to flush runtime journal.
May 17 03:46:32.510175 kernel: loop0: detected capacity change from 0 to 140768
May 17 03:46:32.442399 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 03:46:32.446379 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 03:46:32.447403 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 03:46:32.448674 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 03:46:32.451573 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 03:46:32.464354 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 03:46:32.465349 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 03:46:32.466760 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 03:46:32.476370 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 03:46:32.495389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 03:46:32.505635 udevadm[1131]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 03:46:32.512457 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 03:46:32.552690 systemd-tmpfiles[1125]: ACLs are not supported, ignoring.
May 17 03:46:32.552710 systemd-tmpfiles[1125]: ACLs are not supported, ignoring.
May 17 03:46:32.561579 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 03:46:32.568352 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 03:46:32.569586 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 03:46:32.570824 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 03:46:32.600363 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 03:46:32.622710 kernel: loop1: detected capacity change from 0 to 142488
May 17 03:46:32.661446 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 03:46:32.672543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 03:46:32.696838 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
May 17 03:46:32.698368 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
May 17 03:46:32.709593 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 03:46:32.722233 kernel: loop2: detected capacity change from 0 to 229808
May 17 03:46:32.787757 kernel: loop3: detected capacity change from 0 to 8
May 17 03:46:32.818244 kernel: loop4: detected capacity change from 0 to 140768
May 17 03:46:32.882965 kernel: loop5: detected capacity change from 0 to 142488
May 17 03:46:32.957332 kernel: loop6: detected capacity change from 0 to 229808
May 17 03:46:33.025220 kernel: loop7: detected capacity change from 0 to 8
May 17 03:46:33.024907 (sd-merge)[1153]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 17 03:46:33.025422 (sd-merge)[1153]: Merged extensions into '/usr'.
May 17 03:46:33.032470 systemd[1]: Reloading requested from client PID 1124 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 03:46:33.032490 systemd[1]: Reloading...
May 17 03:46:33.128367 zram_generator::config[1178]: No configuration found.
May 17 03:46:33.339364 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 03:46:33.397642 systemd[1]: Reloading finished in 364 ms.
May 17 03:46:33.425676 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 03:46:33.435457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 03:46:33.438255 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 03:46:33.441485 systemd[1]: Starting ensure-sysext.service...
May 17 03:46:33.446413 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 03:46:33.465789 systemd-udevd[1234]: Using default interface naming scheme 'v255'.
May 17 03:46:33.473141 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
May 17 03:46:33.473161 systemd[1]: Reloading...
May 17 03:46:33.509581 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 03:46:33.509973 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 03:46:33.510983 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 03:46:33.515869 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
May 17 03:46:33.515960 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
May 17 03:46:33.525153 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
May 17 03:46:33.525166 systemd-tmpfiles[1237]: Skipping /boot
May 17 03:46:33.533238 ldconfig[1119]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 03:46:33.556497 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
May 17 03:46:33.556509 systemd-tmpfiles[1237]: Skipping /boot
May 17 03:46:33.557226 zram_generator::config[1276]: No configuration found.
May 17 03:46:33.631230 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1260)
May 17 03:46:33.716821 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 03:46:33.760241 kernel: ACPI: button: Power Button [PWRF]
May 17 03:46:33.768286 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 17 03:46:33.784227 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 17 03:46:33.797595 kernel: mousedev: PS/2 mouse device common for all mice
May 17 03:46:33.818589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 03:46:33.833086 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 17 03:46:33.833162 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 17 03:46:33.837506 kernel: Console: switching to colour dummy device 80x25
May 17 03:46:33.839257 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 17 03:46:33.839298 kernel: [drm] features: -context_init
May 17 03:46:33.840646 kernel: [drm] number of scanouts: 1
May 17 03:46:33.841263 kernel: [drm] number of cap sets: 0
May 17 03:46:33.844240 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 17 03:46:33.851794 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 17 03:46:33.851914 kernel: Console: switching to colour frame buffer device 160x50
May 17 03:46:33.860230 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 17 03:46:33.905061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 03:46:33.908470 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 17 03:46:33.908696 systemd[1]: Reloading finished in 435 ms.
May 17 03:46:33.927863 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 03:46:33.928745 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 03:46:33.936779 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 03:46:33.981287 systemd[1]: Finished ensure-sysext.service.
May 17 03:46:33.984949 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 03:46:33.987313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 03:46:33.992390 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 03:46:34.003416 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 03:46:34.004906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 03:46:34.006097 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 03:46:34.008541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 03:46:34.017558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 03:46:34.020464 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 03:46:34.023562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 03:46:34.023852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 03:46:34.025930 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 03:46:34.033926 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 03:46:34.044340 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 03:46:34.047408 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 03:46:34.049372 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 03:46:34.058469 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 03:46:34.063267 lvm[1360]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 03:46:34.063367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 03:46:34.063460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 03:46:34.069427 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 03:46:34.071660 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 03:46:34.072015 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 03:46:34.110041 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 03:46:34.111017 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 03:46:34.114331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 03:46:34.115568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 03:46:34.131453 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 03:46:34.133458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 03:46:34.135874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 03:46:34.136783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 03:46:34.136929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 03:46:34.142750 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 03:46:34.146244 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 03:46:34.156693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 03:46:34.156856 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 03:46:34.166075 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 03:46:34.166571 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 17 03:46:34.171682 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 03:46:34.177272 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 03:46:34.183495 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 03:46:34.186262 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 03:46:34.195679 augenrules[1405]: No rules May 17 03:46:34.198615 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 03:46:34.218547 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 03:46:34.233425 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 03:46:34.294947 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 03:46:34.300854 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 03:46:34.305119 systemd[1]: Reached target time-set.target - System Time Set. May 17 03:46:34.307922 systemd-networkd[1368]: lo: Link UP May 17 03:46:34.307927 systemd-networkd[1368]: lo: Gained carrier May 17 03:46:34.309579 systemd-networkd[1368]: Enumeration completed May 17 03:46:34.310239 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 03:46:34.310305 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 03:46:34.310954 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 17 03:46:34.311190 systemd-networkd[1368]: eth0: Link UP May 17 03:46:34.311260 systemd-networkd[1368]: eth0: Gained carrier May 17 03:46:34.311312 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 03:46:34.320465 systemd-resolved[1369]: Positive Trust Anchors: May 17 03:46:34.320484 systemd-resolved[1369]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 03:46:34.320527 systemd-resolved[1369]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 03:46:34.321286 systemd-networkd[1368]: eth0: DHCPv4 address 172.24.4.46/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 17 03:46:34.321593 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 03:46:34.322781 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. May 17 03:46:34.333830 systemd-resolved[1369]: Using system hostname 'ci-4081-3-3-n-2f0bbd4ac2.novalocal'. May 17 03:46:34.335603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 03:46:34.337673 systemd[1]: Reached target network.target - Network. May 17 03:46:34.339766 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 03:46:34.341892 systemd[1]: Reached target sysinit.target - System Initialization. 
May 17 03:46:34.344244 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 03:46:34.346401 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 03:46:34.348750 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 03:46:34.351109 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 03:46:34.353334 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 03:46:34.355568 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 03:46:34.355598 systemd[1]: Reached target paths.target - Path Units. May 17 03:46:34.357751 systemd[1]: Reached target timers.target - Timer Units. May 17 03:46:34.361309 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 03:46:34.364978 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 03:46:34.373021 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 03:46:34.375161 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 03:46:34.378856 systemd[1]: Reached target sockets.target - Socket Units. May 17 03:46:34.379557 systemd[1]: Reached target basic.target - Basic System. May 17 03:46:34.380146 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 03:46:34.380180 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 03:46:34.389301 systemd[1]: Starting containerd.service - containerd container runtime... May 17 03:46:34.392299 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 03:46:34.403467 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
May 17 03:46:34.409899 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 03:46:34.418378 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 03:46:34.419429 jq[1428]: false May 17 03:46:34.421094 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 03:46:34.427468 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 03:46:34.433900 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 03:46:34.442474 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 03:46:34.452694 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 03:46:34.454251 extend-filesystems[1429]: Found loop4 May 17 03:46:34.459475 extend-filesystems[1429]: Found loop5 May 17 03:46:34.459475 extend-filesystems[1429]: Found loop6 May 17 03:46:34.459475 extend-filesystems[1429]: Found loop7 May 17 03:46:34.459475 extend-filesystems[1429]: Found vda May 17 03:46:34.459475 extend-filesystems[1429]: Found vda1 May 17 03:46:34.459475 extend-filesystems[1429]: Found vda2 May 17 03:46:34.459475 extend-filesystems[1429]: Found vda3 May 17 03:46:34.459475 extend-filesystems[1429]: Found usr May 17 03:46:34.459475 extend-filesystems[1429]: Found vda4 May 17 03:46:34.459475 extend-filesystems[1429]: Found vda6 May 17 03:46:34.459475 extend-filesystems[1429]: Found vda7 May 17 03:46:34.459475 extend-filesystems[1429]: Found vda9 May 17 03:46:34.459475 extend-filesystems[1429]: Checking size of /dev/vda9 May 17 03:46:34.538931 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 17 03:46:34.504703 dbus-daemon[1427]: [system] SELinux support is enabled May 17 03:46:34.560006 extend-filesystems[1429]: Resized partition /dev/vda9 May 17 03:46:34.577938 kernel: BTRFS warning: 
duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1281) May 17 03:46:34.470409 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 03:46:34.640354 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 17 03:46:34.640409 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) May 17 03:46:34.488026 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 03:46:34.492219 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 03:46:34.498580 systemd[1]: Starting update-engine.service - Update Engine... May 17 03:46:34.650734 update_engine[1446]: I20250517 03:46:34.587755 1446 main.cc:92] Flatcar Update Engine starting May 17 03:46:34.650734 update_engine[1446]: I20250517 03:46:34.596781 1446 update_check_scheduler.cc:74] Next update check in 3m45s May 17 03:46:34.521603 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 03:46:34.651096 jq[1450]: true May 17 03:46:34.524417 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 03:46:34.658237 jq[1454]: true May 17 03:46:34.541841 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 03:46:34.542046 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 03:46:34.542478 systemd[1]: motdgen.service: Deactivated successfully. May 17 03:46:34.542628 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 03:46:34.553929 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 03:46:34.555317 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 03:46:34.582742 systemd-logind[1443]: New seat seat0. 
May 17 03:46:34.621061 systemd[1]: Started update-engine.service - Update Engine. May 17 03:46:34.621526 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 03:46:34.630125 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 03:46:34.632809 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 03:46:34.632841 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 03:46:34.634769 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 03:46:34.634788 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 03:46:34.644306 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) May 17 03:46:34.644324 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 03:46:34.646546 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 03:46:34.651461 systemd[1]: Started systemd-logind.service - User Login Management. May 17 03:46:34.664783 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 03:46:34.664783 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 03:46:34.664783 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 17 03:46:34.677418 extend-filesystems[1429]: Resized filesystem in /dev/vda9 May 17 03:46:34.670516 systemd[1]: extend-filesystems.service: Deactivated successfully. 
May 17 03:46:34.672321 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 03:46:34.684783 tar[1453]: linux-amd64/LICENSE May 17 03:46:34.684783 tar[1453]: linux-amd64/helm May 17 03:46:34.734723 bash[1476]: Updated "/home/core/.ssh/authorized_keys" May 17 03:46:34.734761 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 03:46:34.752529 systemd[1]: Starting sshkeys.service... May 17 03:46:34.797446 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 03:46:34.815606 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 03:46:34.903268 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 03:46:35.094286 containerd[1462]: time="2025-05-17T03:46:35.092817023Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 03:46:35.161700 containerd[1462]: time="2025-05-17T03:46:35.161474550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 03:46:35.163257 containerd[1462]: time="2025-05-17T03:46:35.163172504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 03:46:35.163257 containerd[1462]: time="2025-05-17T03:46:35.163250631Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 03:46:35.163344 containerd[1462]: time="2025-05-17T03:46:35.163279205Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 17 03:46:35.163495 containerd[1462]: time="2025-05-17T03:46:35.163462779Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 03:46:35.163495 containerd[1462]: time="2025-05-17T03:46:35.163491723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 03:46:35.163587 containerd[1462]: time="2025-05-17T03:46:35.163562626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 03:46:35.163619 containerd[1462]: time="2025-05-17T03:46:35.163587282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 03:46:35.163808 containerd[1462]: time="2025-05-17T03:46:35.163773021Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 03:46:35.163808 containerd[1462]: time="2025-05-17T03:46:35.163799080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 03:46:35.163858 containerd[1462]: time="2025-05-17T03:46:35.163814599Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 03:46:35.163858 containerd[1462]: time="2025-05-17T03:46:35.163828335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 03:46:35.163934 containerd[1462]: time="2025-05-17T03:46:35.163912392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 17 03:46:35.164163 containerd[1462]: time="2025-05-17T03:46:35.164132765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 03:46:35.165668 containerd[1462]: time="2025-05-17T03:46:35.165295156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 03:46:35.165668 containerd[1462]: time="2025-05-17T03:46:35.165322908Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 03:46:35.165668 containerd[1462]: time="2025-05-17T03:46:35.165471296Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 03:46:35.165668 containerd[1462]: time="2025-05-17T03:46:35.165528022Z" level=info msg="metadata content store policy set" policy=shared May 17 03:46:35.173772 containerd[1462]: time="2025-05-17T03:46:35.173753479Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 03:46:35.173859 containerd[1462]: time="2025-05-17T03:46:35.173844450Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 03:46:35.173964 containerd[1462]: time="2025-05-17T03:46:35.173948405Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 03:46:35.174070 containerd[1462]: time="2025-05-17T03:46:35.174055526Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 03:46:35.174139 containerd[1462]: time="2025-05-17T03:46:35.174125397Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 17 03:46:35.174347 containerd[1462]: time="2025-05-17T03:46:35.174328768Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.175903081Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176042162Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176066617Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176082808Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176106382Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176127261Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176150214Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176173438Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176210167Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176231276Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176247527Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176265350Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176292822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178220 containerd[1462]: time="2025-05-17T03:46:35.176313370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176332967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176353235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176372150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176387810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176406284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176425881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176454144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176477789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176495933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176513275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176527612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176554973Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176584459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176600509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178559 containerd[1462]: time="2025-05-17T03:46:35.176619134Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 03:46:35.178841 containerd[1462]: time="2025-05-17T03:46:35.176671682Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 03:46:35.178841 containerd[1462]: time="2025-05-17T03:46:35.176696559Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 03:46:35.178841 containerd[1462]: time="2025-05-17T03:46:35.176709754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 03:46:35.178841 containerd[1462]: time="2025-05-17T03:46:35.176728549Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 03:46:35.178841 containerd[1462]: time="2025-05-17T03:46:35.176745090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 03:46:35.178841 containerd[1462]: time="2025-05-17T03:46:35.176763935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 03:46:35.178841 containerd[1462]: time="2025-05-17T03:46:35.176784153Z" level=info msg="NRI interface is disabled by configuration." May 17 03:46:35.178841 containerd[1462]: time="2025-05-17T03:46:35.176795635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 03:46:35.179000 containerd[1462]: time="2025-05-17T03:46:35.177114252Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 03:46:35.179000 containerd[1462]: time="2025-05-17T03:46:35.177192208Z" level=info msg="Connect containerd service" May 17 03:46:35.179000 containerd[1462]: time="2025-05-17T03:46:35.177267049Z" level=info msg="using legacy CRI server" May 17 03:46:35.179000 containerd[1462]: time="2025-05-17T03:46:35.177276306Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 03:46:35.179000 containerd[1462]: time="2025-05-17T03:46:35.177383808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 03:46:35.180913 containerd[1462]: time="2025-05-17T03:46:35.180888361Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 03:46:35.181154 containerd[1462]: time="2025-05-17T03:46:35.181124924Z" level=info msg="Start subscribing containerd event" May 17 03:46:35.181301 containerd[1462]: time="2025-05-17T03:46:35.181285255Z" level=info msg="Start recovering state" May 17 03:46:35.181404 containerd[1462]: time="2025-05-17T03:46:35.181390502Z" level=info msg="Start event monitor" May 17 03:46:35.181458 containerd[1462]: time="2025-05-17T03:46:35.181446658Z" level=info msg="Start 
snapshots syncer" May 17 03:46:35.181508 containerd[1462]: time="2025-05-17T03:46:35.181496381Z" level=info msg="Start cni network conf syncer for default" May 17 03:46:35.181555 containerd[1462]: time="2025-05-17T03:46:35.181543860Z" level=info msg="Start streaming server" May 17 03:46:35.182070 containerd[1462]: time="2025-05-17T03:46:35.181978235Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 03:46:35.182070 containerd[1462]: time="2025-05-17T03:46:35.182036935Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 03:46:35.184347 containerd[1462]: time="2025-05-17T03:46:35.183261101Z" level=info msg="containerd successfully booted in 0.091828s" May 17 03:46:35.183349 systemd[1]: Started containerd.service - containerd container runtime. May 17 03:46:35.403020 tar[1453]: linux-amd64/README.md May 17 03:46:35.419187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 03:46:35.421233 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 03:46:35.443431 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 03:46:35.449249 systemd-networkd[1368]: eth0: Gained IPv6LL May 17 03:46:35.449853 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. May 17 03:46:35.456870 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 03:46:35.462915 systemd[1]: Started sshd@0-172.24.4.46:22-172.24.4.1:59222.service - OpenSSH per-connection server daemon (172.24.4.1:59222). May 17 03:46:35.467132 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 03:46:35.470120 systemd[1]: issuegen.service: Deactivated successfully. May 17 03:46:35.470874 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 03:46:35.476542 systemd[1]: Reached target network-online.target - Network is Online. 
May 17 03:46:35.491789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 03:46:35.498676 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 17 03:46:35.504804 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 17 03:46:35.519986 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 17 03:46:35.528432 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 17 03:46:35.537691 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 17 03:46:35.540924 systemd[1]: Reached target getty.target - Login Prompts.
May 17 03:46:35.545740 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 17 03:46:36.353381 sshd[1515]: Accepted publickey for core from 172.24.4.1 port 59222 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:46:36.359181 sshd[1515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:46:36.390641 systemd-logind[1443]: New session 1 of user core.
May 17 03:46:36.397049 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 17 03:46:36.418731 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 17 03:46:36.448989 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 17 03:46:36.462620 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 17 03:46:36.475995 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 03:46:36.598395 systemd[1538]: Queued start job for default target default.target.
May 17 03:46:36.605257 systemd[1538]: Created slice app.slice - User Application Slice.
May 17 03:46:36.605498 systemd[1538]: Reached target paths.target - Paths.
May 17 03:46:36.605590 systemd[1538]: Reached target timers.target - Timers.
May 17 03:46:36.609324 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 17 03:46:36.619371 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 17 03:46:36.619993 systemd[1538]: Reached target sockets.target - Sockets.
May 17 03:46:36.620010 systemd[1538]: Reached target basic.target - Basic System.
May 17 03:46:36.620045 systemd[1538]: Reached target default.target - Main User Target.
May 17 03:46:36.620072 systemd[1538]: Startup finished in 137ms.
May 17 03:46:36.620342 systemd[1]: Started user@500.service - User Manager for UID 500.
May 17 03:46:36.626669 systemd[1]: Started session-1.scope - Session 1 of User core.
May 17 03:46:37.110994 systemd[1]: Started sshd@1-172.24.4.46:22-172.24.4.1:59238.service - OpenSSH per-connection server daemon (172.24.4.1:59238).
May 17 03:46:37.670498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 03:46:37.689172 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 03:46:38.771929 sshd[1549]: Accepted publickey for core from 172.24.4.1 port 59238 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:46:38.774752 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:46:38.788029 systemd-logind[1443]: New session 2 of user core.
May 17 03:46:38.798754 systemd[1]: Started session-2.scope - Session 2 of User core.
May 17 03:46:39.204553 kubelet[1556]: E0517 03:46:39.204449 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 03:46:39.209358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 03:46:39.209601 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 03:46:39.210040 systemd[1]: kubelet.service: Consumed 2.223s CPU time.
May 17 03:46:39.494502 sshd[1549]: pam_unix(sshd:session): session closed for user core
May 17 03:46:39.509482 systemd[1]: sshd@1-172.24.4.46:22-172.24.4.1:59238.service: Deactivated successfully.
May 17 03:46:39.512910 systemd[1]: session-2.scope: Deactivated successfully.
May 17 03:46:39.517296 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit.
May 17 03:46:39.524051 systemd[1]: Started sshd@2-172.24.4.46:22-172.24.4.1:59248.service - OpenSSH per-connection server daemon (172.24.4.1:59248).
May 17 03:46:39.531352 systemd-logind[1443]: Removed session 2.
May 17 03:46:40.592907 login[1529]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 03:46:40.601330 systemd-logind[1443]: New session 3 of user core.
May 17 03:46:40.605444 systemd[1]: Started session-3.scope - Session 3 of User core.
May 17 03:46:40.603779 login[1530]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 03:46:40.617091 systemd-logind[1443]: New session 4 of user core.
May 17 03:46:40.622642 systemd[1]: Started session-4.scope - Session 4 of User core.
May 17 03:46:40.877761 sshd[1570]: Accepted publickey for core from 172.24.4.1 port 59248 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:46:40.880887 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:46:40.890684 systemd-logind[1443]: New session 5 of user core.
May 17 03:46:40.905709 systemd[1]: Started session-5.scope - Session 5 of User core.
May 17 03:46:41.462083 coreos-metadata[1424]: May 17 03:46:41.461 WARN failed to locate config-drive, using the metadata service API instead
May 17 03:46:41.507814 coreos-metadata[1424]: May 17 03:46:41.507 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
May 17 03:46:41.548803 sshd[1570]: pam_unix(sshd:session): session closed for user core
May 17 03:46:41.555543 systemd[1]: sshd@2-172.24.4.46:22-172.24.4.1:59248.service: Deactivated successfully.
May 17 03:46:41.559348 systemd[1]: session-5.scope: Deactivated successfully.
May 17 03:46:41.563173 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit.
May 17 03:46:41.565880 systemd-logind[1443]: Removed session 5.
May 17 03:46:41.696807 coreos-metadata[1424]: May 17 03:46:41.696 INFO Fetch successful
May 17 03:46:41.697046 coreos-metadata[1424]: May 17 03:46:41.696 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 17 03:46:41.713014 coreos-metadata[1424]: May 17 03:46:41.712 INFO Fetch successful
May 17 03:46:41.713014 coreos-metadata[1424]: May 17 03:46:41.712 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
May 17 03:46:41.723004 coreos-metadata[1424]: May 17 03:46:41.722 INFO Fetch successful
May 17 03:46:41.723004 coreos-metadata[1424]: May 17 03:46:41.722 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
May 17 03:46:41.735507 coreos-metadata[1424]: May 17 03:46:41.735 INFO Fetch successful
May 17 03:46:41.735507 coreos-metadata[1424]: May 17 03:46:41.735 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
May 17 03:46:41.745851 coreos-metadata[1424]: May 17 03:46:41.745 INFO Fetch successful
May 17 03:46:41.745851 coreos-metadata[1424]: May 17 03:46:41.745 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
May 17 03:46:41.755934 coreos-metadata[1424]: May 17 03:46:41.755 INFO Fetch successful
May 17 03:46:41.807860 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 17 03:46:41.809668 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 17 03:46:41.917634 coreos-metadata[1487]: May 17 03:46:41.917 WARN failed to locate config-drive, using the metadata service API instead
May 17 03:46:41.960046 coreos-metadata[1487]: May 17 03:46:41.959 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 17 03:46:41.971501 coreos-metadata[1487]: May 17 03:46:41.971 INFO Fetch successful
May 17 03:46:41.971501 coreos-metadata[1487]: May 17 03:46:41.971 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 17 03:46:41.980931 coreos-metadata[1487]: May 17 03:46:41.980 INFO Fetch successful
May 17 03:46:42.016602 unknown[1487]: wrote ssh authorized keys file for user: core
May 17 03:46:42.060918 update-ssh-keys[1611]: Updated "/home/core/.ssh/authorized_keys"
May 17 03:46:42.063818 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 17 03:46:42.068918 systemd[1]: Finished sshkeys.service.
May 17 03:46:42.071063 systemd[1]: Reached target multi-user.target - Multi-User System.
May 17 03:46:42.072388 systemd[1]: Startup finished in 1.224s (kernel) + 16.388s (initrd) + 10.935s (userspace) = 28.548s.
May 17 03:46:49.422310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 03:46:49.428570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 03:46:49.778262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 03:46:49.788455 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 03:46:49.942498 kubelet[1623]: E0517 03:46:49.942412 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 03:46:49.949983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 03:46:49.950412 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 03:46:51.580804 systemd[1]: Started sshd@3-172.24.4.46:22-172.24.4.1:56366.service - OpenSSH per-connection server daemon (172.24.4.1:56366).
May 17 03:46:53.179920 sshd[1631]: Accepted publickey for core from 172.24.4.1 port 56366 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:46:53.182841 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:46:53.194515 systemd-logind[1443]: New session 6 of user core.
May 17 03:46:53.201538 systemd[1]: Started session-6.scope - Session 6 of User core.
May 17 03:46:53.990608 sshd[1631]: pam_unix(sshd:session): session closed for user core
May 17 03:46:54.003109 systemd[1]: sshd@3-172.24.4.46:22-172.24.4.1:56366.service: Deactivated successfully.
May 17 03:46:54.006901 systemd[1]: session-6.scope: Deactivated successfully.
May 17 03:46:54.011589 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit.
May 17 03:46:54.016898 systemd[1]: Started sshd@4-172.24.4.46:22-172.24.4.1:52988.service - OpenSSH per-connection server daemon (172.24.4.1:52988).
May 17 03:46:54.020043 systemd-logind[1443]: Removed session 6.
May 17 03:46:55.624690 sshd[1638]: Accepted publickey for core from 172.24.4.1 port 52988 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:46:55.627903 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:46:55.641593 systemd-logind[1443]: New session 7 of user core.
May 17 03:46:55.652531 systemd[1]: Started session-7.scope - Session 7 of User core.
May 17 03:46:56.296927 sshd[1638]: pam_unix(sshd:session): session closed for user core
May 17 03:46:56.309641 systemd[1]: sshd@4-172.24.4.46:22-172.24.4.1:52988.service: Deactivated successfully.
May 17 03:46:56.313749 systemd[1]: session-7.scope: Deactivated successfully.
May 17 03:46:56.317640 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit.
May 17 03:46:56.322837 systemd[1]: Started sshd@5-172.24.4.46:22-172.24.4.1:53002.service - OpenSSH per-connection server daemon (172.24.4.1:53002).
May 17 03:46:56.325895 systemd-logind[1443]: Removed session 7.
May 17 03:46:57.571733 sshd[1645]: Accepted publickey for core from 172.24.4.1 port 53002 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:46:57.575039 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:46:57.587331 systemd-logind[1443]: New session 8 of user core.
May 17 03:46:57.598611 systemd[1]: Started session-8.scope - Session 8 of User core.
May 17 03:46:58.282308 sshd[1645]: pam_unix(sshd:session): session closed for user core
May 17 03:46:58.294561 systemd[1]: sshd@5-172.24.4.46:22-172.24.4.1:53002.service: Deactivated successfully.
May 17 03:46:58.297798 systemd[1]: session-8.scope: Deactivated successfully.
May 17 03:46:58.300536 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit.
May 17 03:46:58.306808 systemd[1]: Started sshd@6-172.24.4.46:22-172.24.4.1:53010.service - OpenSSH per-connection server daemon (172.24.4.1:53010).
May 17 03:46:58.310149 systemd-logind[1443]: Removed session 8.
May 17 03:46:59.642066 sshd[1652]: Accepted publickey for core from 172.24.4.1 port 53010 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:46:59.644869 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:46:59.656637 systemd-logind[1443]: New session 9 of user core.
May 17 03:46:59.664571 systemd[1]: Started session-9.scope - Session 9 of User core.
May 17 03:47:00.094875 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 17 03:47:00.095991 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 03:47:00.098646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 03:47:00.113967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 03:47:00.116394 sudo[1655]: pam_unix(sudo:session): session closed for user root
May 17 03:47:00.267398 sshd[1652]: pam_unix(sshd:session): session closed for user core
May 17 03:47:00.284111 systemd[1]: sshd@6-172.24.4.46:22-172.24.4.1:53010.service: Deactivated successfully.
May 17 03:47:00.289276 systemd[1]: session-9.scope: Deactivated successfully.
May 17 03:47:00.296518 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit.
May 17 03:47:00.302768 systemd[1]: Started sshd@7-172.24.4.46:22-172.24.4.1:53014.service - OpenSSH per-connection server daemon (172.24.4.1:53014).
May 17 03:47:00.305994 systemd-logind[1443]: Removed session 9.
May 17 03:47:00.497748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 03:47:00.514846 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 03:47:00.616563 kubelet[1669]: E0517 03:47:00.616413 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 03:47:00.620646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 03:47:00.620944 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 03:47:01.653252 sshd[1663]: Accepted publickey for core from 172.24.4.1 port 53014 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:47:01.656618 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:47:01.667999 systemd-logind[1443]: New session 10 of user core.
May 17 03:47:01.674653 systemd[1]: Started session-10.scope - Session 10 of User core.
May 17 03:47:02.069002 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 17 03:47:02.069721 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 03:47:02.076666 sudo[1679]: pam_unix(sudo:session): session closed for user root
May 17 03:47:02.085503 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 17 03:47:02.085936 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 03:47:02.114740 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 17 03:47:02.117490 auditctl[1682]: No rules
May 17 03:47:02.118318 systemd[1]: audit-rules.service: Deactivated successfully.
May 17 03:47:02.118711 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 17 03:47:02.125961 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 03:47:02.187780 augenrules[1700]: No rules
May 17 03:47:02.190265 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 03:47:02.193164 sudo[1678]: pam_unix(sudo:session): session closed for user root
May 17 03:47:02.462406 sshd[1663]: pam_unix(sshd:session): session closed for user core
May 17 03:47:02.476528 systemd[1]: sshd@7-172.24.4.46:22-172.24.4.1:53014.service: Deactivated successfully.
May 17 03:47:02.480519 systemd[1]: session-10.scope: Deactivated successfully.
May 17 03:47:02.484402 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit.
May 17 03:47:02.490742 systemd[1]: Started sshd@8-172.24.4.46:22-172.24.4.1:53028.service - OpenSSH per-connection server daemon (172.24.4.1:53028).
May 17 03:47:02.493843 systemd-logind[1443]: Removed session 10.
May 17 03:47:04.099501 sshd[1708]: Accepted publickey for core from 172.24.4.1 port 53028 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:47:04.102444 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:47:04.111921 systemd-logind[1443]: New session 11 of user core.
May 17 03:47:04.121584 systemd[1]: Started session-11.scope - Session 11 of User core.
May 17 03:47:04.513522 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 03:47:04.514273 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 03:47:05.282442 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 17 03:47:05.293868 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 17 03:47:05.839633 systemd-timesyncd[1370]: Contacted time server 172.235.32.243:123 (2.flatcar.pool.ntp.org).
May 17 03:47:05.839707 systemd-timesyncd[1370]: Initial clock synchronization to Sat 2025-05-17 03:47:06.228310 UTC.
May 17 03:47:05.949099 dockerd[1727]: time="2025-05-17T03:47:05.948997061Z" level=info msg="Starting up"
May 17 03:47:06.149859 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport250634290-merged.mount: Deactivated successfully.
May 17 03:47:06.206211 dockerd[1727]: time="2025-05-17T03:47:06.205949533Z" level=info msg="Loading containers: start."
May 17 03:47:06.333297 kernel: Initializing XFRM netlink socket
May 17 03:47:06.469550 systemd-networkd[1368]: docker0: Link UP
May 17 03:47:06.490021 dockerd[1727]: time="2025-05-17T03:47:06.489931261Z" level=info msg="Loading containers: done."
May 17 03:47:06.521330 dockerd[1727]: time="2025-05-17T03:47:06.521117169Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 03:47:06.521330 dockerd[1727]: time="2025-05-17T03:47:06.521267530Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 17 03:47:06.521891 dockerd[1727]: time="2025-05-17T03:47:06.521626843Z" level=info msg="Daemon has completed initialization"
May 17 03:47:06.575134 dockerd[1727]: time="2025-05-17T03:47:06.574570258Z" level=info msg="API listen on /run/docker.sock"
May 17 03:47:06.574722 systemd[1]: Started docker.service - Docker Application Container Engine.
May 17 03:47:07.141601 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2740175865-merged.mount: Deactivated successfully.
May 17 03:47:07.937579 containerd[1462]: time="2025-05-17T03:47:07.936790508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 17 03:47:08.744684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2816621947.mount: Deactivated successfully.
May 17 03:47:10.672629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 17 03:47:10.681495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 03:47:10.799996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 03:47:10.813518 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 03:47:10.855317 kubelet[1930]: E0517 03:47:10.855252 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 03:47:10.857492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 03:47:10.857636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 03:47:11.072680 containerd[1462]: time="2025-05-17T03:47:11.072278577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:11.075847 containerd[1462]: time="2025-05-17T03:47:11.075723389Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075411"
May 17 03:47:11.080104 containerd[1462]: time="2025-05-17T03:47:11.079916199Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:11.096285 containerd[1462]: time="2025-05-17T03:47:11.095101541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:11.098520 containerd[1462]: time="2025-05-17T03:47:11.098444765Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 3.161519878s"
May 17 03:47:11.098820 containerd[1462]: time="2025-05-17T03:47:11.098740779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\""
May 17 03:47:11.100733 containerd[1462]: time="2025-05-17T03:47:11.100688617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 17 03:47:13.231418 containerd[1462]: time="2025-05-17T03:47:13.231333383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:13.234902 containerd[1462]: time="2025-05-17T03:47:13.233252593Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011398"
May 17 03:47:13.236395 containerd[1462]: time="2025-05-17T03:47:13.236330362Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:13.247297 containerd[1462]: time="2025-05-17T03:47:13.247164609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:13.250546 containerd[1462]: time="2025-05-17T03:47:13.250465110Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 2.149561841s"
May 17 03:47:13.250822 containerd[1462]: time="2025-05-17T03:47:13.250738965Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\""
May 17 03:47:13.252542 containerd[1462]: time="2025-05-17T03:47:13.252462090Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 17 03:47:15.051264 containerd[1462]: time="2025-05-17T03:47:15.050383033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:15.051969 containerd[1462]: time="2025-05-17T03:47:15.051927216Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148968"
May 17 03:47:15.055229 containerd[1462]: time="2025-05-17T03:47:15.053618876Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:15.060719 containerd[1462]: time="2025-05-17T03:47:15.060681488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:15.061924 containerd[1462]: time="2025-05-17T03:47:15.061890359Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.808906999s"
May 17 03:47:15.062009 containerd[1462]: time="2025-05-17T03:47:15.061991331Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\""
May 17 03:47:15.062575 containerd[1462]: time="2025-05-17T03:47:15.062547014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 17 03:47:16.557510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805663789.mount: Deactivated successfully.
May 17 03:47:17.201878 containerd[1462]: time="2025-05-17T03:47:17.201785314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:17.203001 containerd[1462]: time="2025-05-17T03:47:17.202946226Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889083"
May 17 03:47:17.204681 containerd[1462]: time="2025-05-17T03:47:17.204634606Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:17.211218 containerd[1462]: time="2025-05-17T03:47:17.211062858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:17.211857 containerd[1462]: time="2025-05-17T03:47:17.211775001Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 2.149121167s"
May 17 03:47:17.211911 containerd[1462]: time="2025-05-17T03:47:17.211862209Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\""
May 17 03:47:17.213562 containerd[1462]: time="2025-05-17T03:47:17.213363895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 17 03:47:17.829311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1939716766.mount: Deactivated successfully.
May 17 03:47:19.522792 containerd[1462]: time="2025-05-17T03:47:19.522377205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:19.524893 containerd[1462]: time="2025-05-17T03:47:19.524818537Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
May 17 03:47:19.526749 containerd[1462]: time="2025-05-17T03:47:19.526707357Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:19.533226 containerd[1462]: time="2025-05-17T03:47:19.532234424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:19.533360 containerd[1462]: time="2025-05-17T03:47:19.533329879Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.319914771s"
May 17 03:47:19.533449 containerd[1462]: time="2025-05-17T03:47:19.533429511Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
May 17 03:47:19.534816 containerd[1462]: time="2025-05-17T03:47:19.534794345Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 03:47:19.535614 update_engine[1446]: I20250517 03:47:19.535557 1446 update_attempter.cc:509] Updating boot flags...
May 17 03:47:19.564283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2012)
May 17 03:47:19.613271 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2016)
May 17 03:47:20.113097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817632727.mount: Deactivated successfully.
May 17 03:47:20.128771 containerd[1462]: time="2025-05-17T03:47:20.128679533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:20.130538 containerd[1462]: time="2025-05-17T03:47:20.130455264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
May 17 03:47:20.134256 containerd[1462]: time="2025-05-17T03:47:20.131973841Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:20.139048 containerd[1462]: time="2025-05-17T03:47:20.138961846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:47:20.141491 containerd[1462]: time="2025-05-17T03:47:20.141426446Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 606.524557ms"
May 17 03:47:20.141713 containerd[1462]: time="2025-05-17T03:47:20.141489190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 03:47:20.143810 containerd[1462]: time="2025-05-17T03:47:20.143770242Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 17 03:47:20.923857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 17 03:47:20.931496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 03:47:21.057334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 03:47:21.065980 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 03:47:21.141870 kubelet[2035]: E0517 03:47:21.139888 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 03:47:21.145355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 03:47:21.145498 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 03:47:24.209344 containerd[1462]: time="2025-05-17T03:47:24.209158910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:47:24.212556 containerd[1462]: time="2025-05-17T03:47:24.212367301Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142747" May 17 03:47:24.216250 containerd[1462]: time="2025-05-17T03:47:24.214182845Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:47:24.223103 containerd[1462]: time="2025-05-17T03:47:24.223026639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:47:24.226818 containerd[1462]: time="2025-05-17T03:47:24.226752631Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.082924597s" May 17 03:47:24.227034 containerd[1462]: time="2025-05-17T03:47:24.226990416Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 17 03:47:28.788141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 03:47:28.796745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 03:47:28.859344 systemd[1]: Reloading requested from client PID 2076 ('systemctl') (unit session-11.scope)... May 17 03:47:28.859362 systemd[1]: Reloading... 
May 17 03:47:28.945246 zram_generator::config[2112]: No configuration found. May 17 03:47:29.100914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 03:47:29.189601 systemd[1]: Reloading finished in 329 ms. May 17 03:47:29.259215 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 03:47:29.259316 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 03:47:29.259956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 03:47:29.267517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 03:47:30.558803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 03:47:30.575766 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 03:47:30.663794 kubelet[2181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 03:47:30.663794 kubelet[2181]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 03:47:30.663794 kubelet[2181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 03:47:30.663794 kubelet[2181]: I0517 03:47:30.663810 2181 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 03:47:31.965262 kubelet[2181]: I0517 03:47:31.965051 2181 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 03:47:31.965262 kubelet[2181]: I0517 03:47:31.965087 2181 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 03:47:31.965968 kubelet[2181]: I0517 03:47:31.965402 2181 server.go:956] "Client rotation is on, will bootstrap in background" May 17 03:47:31.998927 kubelet[2181]: I0517 03:47:31.998371 2181 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 03:47:32.001082 kubelet[2181]: E0517 03:47:32.000999 2181 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.24.4.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 03:47:32.008180 kubelet[2181]: E0517 03:47:32.008110 2181 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 03:47:32.008271 kubelet[2181]: I0517 03:47:32.008186 2181 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 03:47:32.017558 kubelet[2181]: I0517 03:47:32.017504 2181 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 03:47:32.018049 kubelet[2181]: I0517 03:47:32.017983 2181 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 03:47:32.018476 kubelet[2181]: I0517 03:47:32.018046 2181 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-2f0bbd4ac2.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 03:47:32.018582 kubelet[2181]: I0517 03:47:32.018482 2181 topology_manager.go:138] "Creating topology manager with none policy" 
May 17 03:47:32.018582 kubelet[2181]: I0517 03:47:32.018512 2181 container_manager_linux.go:303] "Creating device plugin manager" May 17 03:47:32.018810 kubelet[2181]: I0517 03:47:32.018765 2181 state_mem.go:36] "Initialized new in-memory state store" May 17 03:47:32.026488 kubelet[2181]: I0517 03:47:32.025987 2181 kubelet.go:480] "Attempting to sync node with API server" May 17 03:47:32.026488 kubelet[2181]: I0517 03:47:32.026048 2181 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 03:47:32.026488 kubelet[2181]: I0517 03:47:32.026099 2181 kubelet.go:386] "Adding apiserver pod source" May 17 03:47:32.026488 kubelet[2181]: I0517 03:47:32.026130 2181 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 03:47:32.036718 kubelet[2181]: E0517 03:47:32.036187 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.24.4.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-2f0bbd4ac2.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 03:47:32.036718 kubelet[2181]: E0517 03:47:32.036578 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.24.4.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 03:47:32.037285 kubelet[2181]: I0517 03:47:32.037227 2181 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 03:47:32.037794 kubelet[2181]: I0517 03:47:32.037743 2181 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" 
May 17 03:47:32.039648 kubelet[2181]: W0517 03:47:32.039582 2181 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 03:47:32.051181 kubelet[2181]: I0517 03:47:32.051136 2181 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 03:47:32.051357 kubelet[2181]: I0517 03:47:32.051228 2181 server.go:1289] "Started kubelet" May 17 03:47:32.057283 kubelet[2181]: I0517 03:47:32.056781 2181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 03:47:32.064141 kubelet[2181]: I0517 03:47:32.064089 2181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 03:47:32.067970 kubelet[2181]: I0517 03:47:32.067935 2181 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 03:47:32.068546 kubelet[2181]: E0517 03:47:32.068484 2181 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" May 17 03:47:32.073970 kubelet[2181]: I0517 03:47:32.073568 2181 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 03:47:32.076437 kubelet[2181]: I0517 03:47:32.074601 2181 server.go:317] "Adding debug handlers to kubelet server" May 17 03:47:32.076437 kubelet[2181]: I0517 03:47:32.062940 2181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 03:47:32.076437 kubelet[2181]: I0517 03:47:32.075856 2181 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 03:47:32.080012 kubelet[2181]: I0517 03:47:32.079970 2181 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 03:47:32.080131 kubelet[2181]: I0517 03:47:32.080029 2181 reconciler.go:26] "Reconciler: start to sync state" May 17 03:47:32.080955 kubelet[2181]: E0517 03:47:32.080915 
2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-2f0bbd4ac2.novalocal?timeout=10s\": dial tcp 172.24.4.46:6443: connect: connection refused" interval="200ms" May 17 03:47:32.082508 kubelet[2181]: E0517 03:47:32.082464 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.24.4.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 03:47:32.084158 kubelet[2181]: E0517 03:47:32.082521 2181 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.46:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-2f0bbd4ac2.novalocal.184033d68ff7f7da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-2f0bbd4ac2.novalocal,UID:ci-4081-3-3-n-2f0bbd4ac2.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-2f0bbd4ac2.novalocal,},FirstTimestamp:2025-05-17 03:47:32.051163098 +0000 UTC m=+1.467085700,LastTimestamp:2025-05-17 03:47:32.051163098 +0000 UTC m=+1.467085700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-2f0bbd4ac2.novalocal,}" May 17 03:47:32.085756 kubelet[2181]: I0517 03:47:32.084342 2181 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 03:47:32.086670 kubelet[2181]: I0517 03:47:32.086613 
2181 factory.go:223] Registration of the containerd container factory successfully May 17 03:47:32.086670 kubelet[2181]: I0517 03:47:32.086636 2181 factory.go:223] Registration of the systemd container factory successfully May 17 03:47:32.100159 kubelet[2181]: I0517 03:47:32.100111 2181 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 03:47:32.101599 kubelet[2181]: I0517 03:47:32.101498 2181 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 03:47:32.101599 kubelet[2181]: I0517 03:47:32.101519 2181 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 03:47:32.101599 kubelet[2181]: I0517 03:47:32.101585 2181 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 03:47:32.101599 kubelet[2181]: I0517 03:47:32.101594 2181 kubelet.go:2436] "Starting kubelet main sync loop" May 17 03:47:32.101724 kubelet[2181]: E0517 03:47:32.101629 2181 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 03:47:32.104686 kubelet[2181]: E0517 03:47:32.104659 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.24.4.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 03:47:32.106413 kubelet[2181]: I0517 03:47:32.106337 2181 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 03:47:32.106413 kubelet[2181]: I0517 03:47:32.106354 2181 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 03:47:32.106413 kubelet[2181]: I0517 03:47:32.106369 2181 state_mem.go:36] "Initialized new in-memory state store" May 17 03:47:32.111674 kubelet[2181]: I0517 03:47:32.111631 2181 
policy_none.go:49] "None policy: Start" May 17 03:47:32.111674 kubelet[2181]: I0517 03:47:32.111658 2181 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 03:47:32.112086 kubelet[2181]: I0517 03:47:32.111669 2181 state_mem.go:35] "Initializing new in-memory state store" May 17 03:47:32.118841 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 03:47:32.140278 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 03:47:32.143515 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 03:47:32.152055 kubelet[2181]: E0517 03:47:32.152025 2181 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 03:47:32.152413 kubelet[2181]: I0517 03:47:32.152259 2181 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 03:47:32.152413 kubelet[2181]: I0517 03:47:32.152272 2181 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 03:47:32.152685 kubelet[2181]: I0517 03:47:32.152648 2181 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 03:47:32.154704 kubelet[2181]: E0517 03:47:32.154135 2181 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 03:47:32.154704 kubelet[2181]: E0517 03:47:32.154177 2181 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" May 17 03:47:32.237813 systemd[1]: Created slice kubepods-burstable-pod8a601e008b6f3d41db82375f6b9e77be.slice - libcontainer container kubepods-burstable-pod8a601e008b6f3d41db82375f6b9e77be.slice. 
May 17 03:47:32.247952 kubelet[2181]: E0517 03:47:32.247844 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.258000 systemd[1]: Created slice kubepods-burstable-pod927185f9a2c38372efbe1bfdeb2d535d.slice - libcontainer container kubepods-burstable-pod927185f9a2c38372efbe1bfdeb2d535d.slice. May 17 03:47:32.259137 kubelet[2181]: I0517 03:47:32.258885 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.260296 kubelet[2181]: E0517 03:47:32.259987 2181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.46:6443/api/v1/nodes\": dial tcp 172.24.4.46:6443: connect: connection refused" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.263659 kubelet[2181]: E0517 03:47:32.263618 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.267968 systemd[1]: Created slice kubepods-burstable-podcedd7e8c8f87b652245624c450f39fdd.slice - libcontainer container kubepods-burstable-podcedd7e8c8f87b652245624c450f39fdd.slice. 
May 17 03:47:32.273925 kubelet[2181]: E0517 03:47:32.273880 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.282362 kubelet[2181]: E0517 03:47:32.282297 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-2f0bbd4ac2.novalocal?timeout=10s\": dial tcp 172.24.4.46:6443: connect: connection refused" interval="400ms" May 17 03:47:32.381314 kubelet[2181]: I0517 03:47:32.381119 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.381314 kubelet[2181]: I0517 03:47:32.381278 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cedd7e8c8f87b652245624c450f39fdd-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"cedd7e8c8f87b652245624c450f39fdd\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.381314 kubelet[2181]: I0517 03:47:32.381338 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.381669 
kubelet[2181]: I0517 03:47:32.381385 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.381669 kubelet[2181]: I0517 03:47:32.381431 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a601e008b6f3d41db82375f6b9e77be-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"8a601e008b6f3d41db82375f6b9e77be\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.381669 kubelet[2181]: I0517 03:47:32.381486 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a601e008b6f3d41db82375f6b9e77be-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"8a601e008b6f3d41db82375f6b9e77be\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.381669 kubelet[2181]: I0517 03:47:32.381546 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a601e008b6f3d41db82375f6b9e77be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"8a601e008b6f3d41db82375f6b9e77be\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.382040 kubelet[2181]: I0517 03:47:32.381594 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.382040 kubelet[2181]: I0517 03:47:32.381641 2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.464650 kubelet[2181]: I0517 03:47:32.464118 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.464936 kubelet[2181]: E0517 03:47:32.464838 2181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.46:6443/api/v1/nodes\": dial tcp 172.24.4.46:6443: connect: connection refused" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.551709 containerd[1462]: time="2025-05-17T03:47:32.551347215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal,Uid:8a601e008b6f3d41db82375f6b9e77be,Namespace:kube-system,Attempt:0,}" May 17 03:47:32.568580 containerd[1462]: time="2025-05-17T03:47:32.567820946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal,Uid:927185f9a2c38372efbe1bfdeb2d535d,Namespace:kube-system,Attempt:0,}" May 17 03:47:32.576279 containerd[1462]: time="2025-05-17T03:47:32.575690409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal,Uid:cedd7e8c8f87b652245624c450f39fdd,Namespace:kube-system,Attempt:0,}" May 17 03:47:32.684078 kubelet[2181]: 
E0517 03:47:32.683971 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-2f0bbd4ac2.novalocal?timeout=10s\": dial tcp 172.24.4.46:6443: connect: connection refused" interval="800ms" May 17 03:47:32.867552 kubelet[2181]: I0517 03:47:32.867136 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.867552 kubelet[2181]: E0517 03:47:32.867443 2181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.46:6443/api/v1/nodes\": dial tcp 172.24.4.46:6443: connect: connection refused" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:32.917604 kubelet[2181]: E0517 03:47:32.917506 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.24.4.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 03:47:32.931847 kubelet[2181]: E0517 03:47:32.931754 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.24.4.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 03:47:33.137125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480288720.mount: Deactivated successfully. 
May 17 03:47:33.148366 containerd[1462]: time="2025-05-17T03:47:33.148162336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 03:47:33.154283 containerd[1462]: time="2025-05-17T03:47:33.153971523Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" May 17 03:47:33.156716 containerd[1462]: time="2025-05-17T03:47:33.156446123Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 03:47:33.159729 containerd[1462]: time="2025-05-17T03:47:33.159628133Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 03:47:33.163967 containerd[1462]: time="2025-05-17T03:47:33.163855259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 03:47:33.164076 containerd[1462]: time="2025-05-17T03:47:33.164000543Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 03:47:33.165631 containerd[1462]: time="2025-05-17T03:47:33.165311539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 03:47:33.174921 containerd[1462]: time="2025-05-17T03:47:33.174804928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 03:47:33.180350 
containerd[1462]: time="2025-05-17T03:47:33.179355206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 603.530457ms" May 17 03:47:33.185457 containerd[1462]: time="2025-05-17T03:47:33.185360730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.844348ms" May 17 03:47:33.196061 containerd[1462]: time="2025-05-17T03:47:33.195953319Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.933395ms" May 17 03:47:33.308415 kubelet[2181]: E0517 03:47:33.308333 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.24.4.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 03:47:33.399259 containerd[1462]: time="2025-05-17T03:47:33.398540713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:47:33.403024 containerd[1462]: time="2025-05-17T03:47:33.400632552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:47:33.403024 containerd[1462]: time="2025-05-17T03:47:33.400710881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:33.403024 containerd[1462]: time="2025-05-17T03:47:33.401120610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:33.410053 containerd[1462]: time="2025-05-17T03:47:33.409520729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:47:33.410053 containerd[1462]: time="2025-05-17T03:47:33.409652129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:47:33.410053 containerd[1462]: time="2025-05-17T03:47:33.409698788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:33.410053 containerd[1462]: time="2025-05-17T03:47:33.409873175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:33.437940 systemd[1]: Started cri-containerd-1c1d9b14516e00849227556caefc6ae5443de95eb8303c13508e298bd438a938.scope - libcontainer container 1c1d9b14516e00849227556caefc6ae5443de95eb8303c13508e298bd438a938. May 17 03:47:33.444684 containerd[1462]: time="2025-05-17T03:47:33.439961598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:47:33.444684 containerd[1462]: time="2025-05-17T03:47:33.444495243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:47:33.444684 containerd[1462]: time="2025-05-17T03:47:33.444511183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:33.446582 containerd[1462]: time="2025-05-17T03:47:33.445995964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:33.454478 systemd[1]: Started cri-containerd-eb90b397b30ccdd5d7febd0674b9c3423bf81d86f99e1cd80f911f73bc8151ca.scope - libcontainer container eb90b397b30ccdd5d7febd0674b9c3423bf81d86f99e1cd80f911f73bc8151ca. May 17 03:47:33.477365 systemd[1]: Started cri-containerd-bdcf34e8de77c0715c514c0e9536c56aa30fc8655e992de5ef3003c62f32752e.scope - libcontainer container bdcf34e8de77c0715c514c0e9536c56aa30fc8655e992de5ef3003c62f32752e. May 17 03:47:33.485748 kubelet[2181]: E0517 03:47:33.485689 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-2f0bbd4ac2.novalocal?timeout=10s\": dial tcp 172.24.4.46:6443: connect: connection refused" interval="1.6s" May 17 03:47:33.517922 containerd[1462]: time="2025-05-17T03:47:33.517764701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal,Uid:927185f9a2c38372efbe1bfdeb2d535d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c1d9b14516e00849227556caefc6ae5443de95eb8303c13508e298bd438a938\"" May 17 03:47:33.532288 containerd[1462]: time="2025-05-17T03:47:33.532077939Z" level=info msg="CreateContainer within sandbox \"1c1d9b14516e00849227556caefc6ae5443de95eb8303c13508e298bd438a938\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 03:47:33.550306 containerd[1462]: time="2025-05-17T03:47:33.549728151Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal,Uid:cedd7e8c8f87b652245624c450f39fdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb90b397b30ccdd5d7febd0674b9c3423bf81d86f99e1cd80f911f73bc8151ca\"" May 17 03:47:33.553843 containerd[1462]: time="2025-05-17T03:47:33.553802139Z" level=info msg="CreateContainer within sandbox \"1c1d9b14516e00849227556caefc6ae5443de95eb8303c13508e298bd438a938\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d98bb63225a931a46f18c0894964b686102a9708002d5269736826aa6c9bee8d\"" May 17 03:47:33.554430 containerd[1462]: time="2025-05-17T03:47:33.553971700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal,Uid:8a601e008b6f3d41db82375f6b9e77be,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdcf34e8de77c0715c514c0e9536c56aa30fc8655e992de5ef3003c62f32752e\"" May 17 03:47:33.554943 containerd[1462]: time="2025-05-17T03:47:33.554906815Z" level=info msg="StartContainer for \"d98bb63225a931a46f18c0894964b686102a9708002d5269736826aa6c9bee8d\"" May 17 03:47:33.558233 containerd[1462]: time="2025-05-17T03:47:33.558096088Z" level=info msg="CreateContainer within sandbox \"eb90b397b30ccdd5d7febd0674b9c3423bf81d86f99e1cd80f911f73bc8151ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 03:47:33.569583 containerd[1462]: time="2025-05-17T03:47:33.569435180Z" level=info msg="CreateContainer within sandbox \"bdcf34e8de77c0715c514c0e9536c56aa30fc8655e992de5ef3003c62f32752e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 03:47:33.580237 kubelet[2181]: E0517 03:47:33.580091 2181 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.24.4.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-2f0bbd4ac2.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.46:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 03:47:33.592360 containerd[1462]: time="2025-05-17T03:47:33.592285746Z" level=info msg="CreateContainer within sandbox \"eb90b397b30ccdd5d7febd0674b9c3423bf81d86f99e1cd80f911f73bc8151ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"acfe508e037e7003eeb7b688c0fed4308f66a3bab18a88e974e39b1912c4a2ef\"" May 17 03:47:33.594302 containerd[1462]: time="2025-05-17T03:47:33.594238952Z" level=info msg="StartContainer for \"acfe508e037e7003eeb7b688c0fed4308f66a3bab18a88e974e39b1912c4a2ef\"" May 17 03:47:33.596918 systemd[1]: Started cri-containerd-d98bb63225a931a46f18c0894964b686102a9708002d5269736826aa6c9bee8d.scope - libcontainer container d98bb63225a931a46f18c0894964b686102a9708002d5269736826aa6c9bee8d. May 17 03:47:33.604420 containerd[1462]: time="2025-05-17T03:47:33.604380410Z" level=info msg="CreateContainer within sandbox \"bdcf34e8de77c0715c514c0e9536c56aa30fc8655e992de5ef3003c62f32752e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d0387eb09e6a274be96dd9082f9dcbcadd75dd97a5746bb487454ba56c20f46\"" May 17 03:47:33.605689 containerd[1462]: time="2025-05-17T03:47:33.605650667Z" level=info msg="StartContainer for \"7d0387eb09e6a274be96dd9082f9dcbcadd75dd97a5746bb487454ba56c20f46\"" May 17 03:47:33.630362 systemd[1]: Started cri-containerd-acfe508e037e7003eeb7b688c0fed4308f66a3bab18a88e974e39b1912c4a2ef.scope - libcontainer container acfe508e037e7003eeb7b688c0fed4308f66a3bab18a88e974e39b1912c4a2ef. May 17 03:47:33.656507 systemd[1]: Started cri-containerd-7d0387eb09e6a274be96dd9082f9dcbcadd75dd97a5746bb487454ba56c20f46.scope - libcontainer container 7d0387eb09e6a274be96dd9082f9dcbcadd75dd97a5746bb487454ba56c20f46. 
May 17 03:47:33.677100 kubelet[2181]: I0517 03:47:33.677040 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:33.678100 kubelet[2181]: E0517 03:47:33.677700 2181 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.46:6443/api/v1/nodes\": dial tcp 172.24.4.46:6443: connect: connection refused" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:33.688630 containerd[1462]: time="2025-05-17T03:47:33.688582112Z" level=info msg="StartContainer for \"d98bb63225a931a46f18c0894964b686102a9708002d5269736826aa6c9bee8d\" returns successfully" May 17 03:47:33.714376 containerd[1462]: time="2025-05-17T03:47:33.714006094Z" level=info msg="StartContainer for \"acfe508e037e7003eeb7b688c0fed4308f66a3bab18a88e974e39b1912c4a2ef\" returns successfully" May 17 03:47:33.749302 containerd[1462]: time="2025-05-17T03:47:33.749168378Z" level=info msg="StartContainer for \"7d0387eb09e6a274be96dd9082f9dcbcadd75dd97a5746bb487454ba56c20f46\" returns successfully" May 17 03:47:34.134800 kubelet[2181]: E0517 03:47:34.134037 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:34.141613 kubelet[2181]: E0517 03:47:34.141583 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:34.143630 kubelet[2181]: E0517 03:47:34.143608 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:35.145594 kubelet[2181]: E0517 03:47:35.145102 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:35.145594 kubelet[2181]: E0517 03:47:35.145475 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:35.280876 kubelet[2181]: I0517 03:47:35.280849 2181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:35.679155 kubelet[2181]: E0517 03:47:35.678963 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:36.149359 kubelet[2181]: E0517 03:47:36.149051 2181 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:36.342461 kubelet[2181]: E0517 03:47:36.342415 2181 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:36.413758 kubelet[2181]: I0517 03:47:36.413247 2181 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:36.469833 kubelet[2181]: I0517 03:47:36.469786 2181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:36.476813 kubelet[2181]: E0517 03:47:36.476770 2181 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:36.476813 kubelet[2181]: I0517 03:47:36.476813 2181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:36.479274 kubelet[2181]: E0517 03:47:36.478658 2181 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:36.479274 kubelet[2181]: I0517 03:47:36.478694 2181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:36.483259 kubelet[2181]: E0517 03:47:36.483218 2181 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:37.039526 kubelet[2181]: I0517 03:47:37.039461 2181 apiserver.go:52] "Watching apiserver" May 17 03:47:37.087238 kubelet[2181]: I0517 03:47:37.084604 2181 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 03:47:39.177836 systemd[1]: Reloading requested from client PID 2465 ('systemctl') (unit session-11.scope)... May 17 03:47:39.177880 systemd[1]: Reloading... May 17 03:47:39.301898 zram_generator::config[2504]: No configuration found. May 17 03:47:39.456014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 03:47:39.560815 systemd[1]: Reloading finished in 382 ms. 
May 17 03:47:39.621777 kubelet[2181]: I0517 03:47:39.621533 2181 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 03:47:39.621674 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 03:47:39.637407 systemd[1]: kubelet.service: Deactivated successfully. May 17 03:47:39.637714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 03:47:39.637786 systemd[1]: kubelet.service: Consumed 2.181s CPU time, 131.0M memory peak, 0B memory swap peak. May 17 03:47:39.644442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 03:47:40.011596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 03:47:40.024609 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 03:47:40.093559 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 03:47:40.093559 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 03:47:40.093559 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 03:47:40.093559 kubelet[2567]: I0517 03:47:40.091982 2567 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 03:47:40.099595 kubelet[2567]: I0517 03:47:40.099556 2567 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 03:47:40.099595 kubelet[2567]: I0517 03:47:40.099587 2567 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 03:47:40.099921 kubelet[2567]: I0517 03:47:40.099845 2567 server.go:956] "Client rotation is on, will bootstrap in background" May 17 03:47:40.101381 kubelet[2567]: I0517 03:47:40.101355 2567 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 17 03:47:40.103779 kubelet[2567]: I0517 03:47:40.103735 2567 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 03:47:40.108240 kubelet[2567]: E0517 03:47:40.107541 2567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 03:47:40.108240 kubelet[2567]: I0517 03:47:40.107572 2567 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 03:47:40.113273 kubelet[2567]: I0517 03:47:40.110964 2567 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 03:47:40.113273 kubelet[2567]: I0517 03:47:40.111171 2567 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 03:47:40.113273 kubelet[2567]: I0517 03:47:40.111214 2567 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-2f0bbd4ac2.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 03:47:40.113273 kubelet[2567]: I0517 03:47:40.111502 2567 topology_manager.go:138] "Creating topology manager with none policy" 
May 17 03:47:40.113582 kubelet[2567]: I0517 03:47:40.111514 2567 container_manager_linux.go:303] "Creating device plugin manager" May 17 03:47:40.113582 kubelet[2567]: I0517 03:47:40.111565 2567 state_mem.go:36] "Initialized new in-memory state store" May 17 03:47:40.113582 kubelet[2567]: I0517 03:47:40.111728 2567 kubelet.go:480] "Attempting to sync node with API server" May 17 03:47:40.113582 kubelet[2567]: I0517 03:47:40.111742 2567 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 03:47:40.113582 kubelet[2567]: I0517 03:47:40.111799 2567 kubelet.go:386] "Adding apiserver pod source" May 17 03:47:40.113582 kubelet[2567]: I0517 03:47:40.111826 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 03:47:40.118230 kubelet[2567]: I0517 03:47:40.117597 2567 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 03:47:40.118230 kubelet[2567]: I0517 03:47:40.118153 2567 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 03:47:40.123745 kubelet[2567]: I0517 03:47:40.123724 2567 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 03:47:40.123801 kubelet[2567]: I0517 03:47:40.123772 2567 server.go:1289] "Started kubelet" May 17 03:47:40.126648 kubelet[2567]: I0517 03:47:40.126121 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 03:47:40.127103 kubelet[2567]: I0517 03:47:40.127091 2567 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 03:47:40.127260 kubelet[2567]: I0517 03:47:40.127242 2567 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 03:47:40.127905 kubelet[2567]: I0517 03:47:40.127880 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 
03:47:40.128299 kubelet[2567]: I0517 03:47:40.128285 2567 server.go:317] "Adding debug handlers to kubelet server" May 17 03:47:40.136287 kubelet[2567]: I0517 03:47:40.136253 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 03:47:40.138129 kubelet[2567]: I0517 03:47:40.138103 2567 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 03:47:40.141783 kubelet[2567]: E0517 03:47:40.141747 2567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" not found" May 17 03:47:40.142953 kubelet[2567]: I0517 03:47:40.142922 2567 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 03:47:40.143084 kubelet[2567]: I0517 03:47:40.143061 2567 reconciler.go:26] "Reconciler: start to sync state" May 17 03:47:40.147479 kubelet[2567]: I0517 03:47:40.147440 2567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 03:47:40.148379 kubelet[2567]: I0517 03:47:40.148366 2567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 03:47:40.148680 kubelet[2567]: I0517 03:47:40.148449 2567 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 03:47:40.148680 kubelet[2567]: I0517 03:47:40.148470 2567 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 03:47:40.148680 kubelet[2567]: I0517 03:47:40.148477 2567 kubelet.go:2436] "Starting kubelet main sync loop" May 17 03:47:40.148680 kubelet[2567]: E0517 03:47:40.148511 2567 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 03:47:40.153046 kubelet[2567]: I0517 03:47:40.153013 2567 factory.go:223] Registration of the systemd container factory successfully May 17 03:47:40.153167 kubelet[2567]: I0517 03:47:40.153140 2567 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 03:47:40.155102 kubelet[2567]: I0517 03:47:40.154598 2567 factory.go:223] Registration of the containerd container factory successfully May 17 03:47:40.272009 kubelet[2567]: E0517 03:47:40.271811 2567 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 03:47:40.330300 kubelet[2567]: I0517 03:47:40.330252 2567 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 03:47:40.330521 kubelet[2567]: I0517 03:47:40.330507 2567 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 03:47:40.330613 kubelet[2567]: I0517 03:47:40.330604 2567 state_mem.go:36] "Initialized new in-memory state store" May 17 03:47:40.331348 kubelet[2567]: I0517 03:47:40.330934 2567 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 03:47:40.331348 kubelet[2567]: I0517 03:47:40.331283 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 03:47:40.331348 kubelet[2567]: I0517 03:47:40.331313 2567 policy_none.go:49] "None policy: Start" May 17 03:47:40.331348 kubelet[2567]: I0517 03:47:40.331324 2567 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 03:47:40.331348 kubelet[2567]: I0517 03:47:40.331336 2567 state_mem.go:35] "Initializing new 
in-memory state store" May 17 03:47:40.331651 kubelet[2567]: I0517 03:47:40.331445 2567 state_mem.go:75] "Updated machine memory state" May 17 03:47:40.336654 kubelet[2567]: E0517 03:47:40.336626 2567 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 03:47:40.337278 kubelet[2567]: I0517 03:47:40.336911 2567 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 03:47:40.337278 kubelet[2567]: I0517 03:47:40.336930 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 03:47:40.337278 kubelet[2567]: I0517 03:47:40.337141 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 03:47:40.340825 kubelet[2567]: E0517 03:47:40.340792 2567 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 03:47:40.449274 kubelet[2567]: I0517 03:47:40.446838 2567 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.462172 kubelet[2567]: I0517 03:47:40.462119 2567 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.463805 kubelet[2567]: I0517 03:47:40.462621 2567 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.475577 kubelet[2567]: I0517 03:47:40.475488 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.476698 kubelet[2567]: I0517 03:47:40.475945 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.477444 kubelet[2567]: I0517 03:47:40.477187 2567 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.493689 kubelet[2567]: I0517 03:47:40.492623 2567 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 03:47:40.493689 kubelet[2567]: I0517 03:47:40.492887 2567 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 03:47:40.498261 kubelet[2567]: I0517 03:47:40.498130 2567 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 03:47:40.573022 kubelet[2567]: I0517 03:47:40.572609 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a601e008b6f3d41db82375f6b9e77be-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"8a601e008b6f3d41db82375f6b9e77be\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.573022 kubelet[2567]: I0517 03:47:40.572686 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a601e008b6f3d41db82375f6b9e77be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"8a601e008b6f3d41db82375f6b9e77be\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.573022 kubelet[2567]: I0517 03:47:40.572718 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-kubeconfig\") pod 
\"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.573022 kubelet[2567]: I0517 03:47:40.572743 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.573529 kubelet[2567]: I0517 03:47:40.572766 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a601e008b6f3d41db82375f6b9e77be-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"8a601e008b6f3d41db82375f6b9e77be\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.573529 kubelet[2567]: I0517 03:47:40.572786 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.573529 kubelet[2567]: I0517 03:47:40.573328 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.573529 kubelet[2567]: I0517 03:47:40.573351 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/927185f9a2c38372efbe1bfdeb2d535d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"927185f9a2c38372efbe1bfdeb2d535d\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:40.573529 kubelet[2567]: I0517 03:47:40.573384 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cedd7e8c8f87b652245624c450f39fdd-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" (UID: \"cedd7e8c8f87b652245624c450f39fdd\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:41.114615 kubelet[2567]: I0517 03:47:41.114175 2567 apiserver.go:52] "Watching apiserver" May 17 03:47:41.144101 kubelet[2567]: I0517 03:47:41.144023 2567 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 03:47:41.294076 kubelet[2567]: I0517 03:47:41.291620 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:41.312572 kubelet[2567]: I0517 03:47:41.312510 2567 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 03:47:41.312821 kubelet[2567]: E0517 03:47:41.312626 2567 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:47:41.338584 kubelet[2567]: I0517 03:47:41.337618 2567 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-2f0bbd4ac2.novalocal" podStartSLOduration=1.337581427 podStartE2EDuration="1.337581427s" podCreationTimestamp="2025-05-17 03:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 03:47:41.337413515 +0000 UTC m=+1.302998789" watchObservedRunningTime="2025-05-17 03:47:41.337581427 +0000 UTC m=+1.303166701" May 17 03:47:41.359415 kubelet[2567]: I0517 03:47:41.359140 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-2f0bbd4ac2.novalocal" podStartSLOduration=1.359122541 podStartE2EDuration="1.359122541s" podCreationTimestamp="2025-05-17 03:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 03:47:41.347863056 +0000 UTC m=+1.313448330" watchObservedRunningTime="2025-05-17 03:47:41.359122541 +0000 UTC m=+1.324707815" May 17 03:47:41.377223 kubelet[2567]: I0517 03:47:41.376939 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2f0bbd4ac2.novalocal" podStartSLOduration=1.376924858 podStartE2EDuration="1.376924858s" podCreationTimestamp="2025-05-17 03:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 03:47:41.360370081 +0000 UTC m=+1.325955355" watchObservedRunningTime="2025-05-17 03:47:41.376924858 +0000 UTC m=+1.342510142" May 17 03:47:43.650687 kubelet[2567]: I0517 03:47:43.650609 2567 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 03:47:43.652439 containerd[1462]: time="2025-05-17T03:47:43.652346554Z" level=info msg="No cni config template is specified, wait 
for other system components to drop the config." May 17 03:47:43.653758 kubelet[2567]: I0517 03:47:43.652787 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 03:47:44.517231 systemd[1]: Created slice kubepods-besteffort-pod059b292f_90ff_4d97_a182_547e119e7090.slice - libcontainer container kubepods-besteffort-pod059b292f_90ff_4d97_a182_547e119e7090.slice. May 17 03:47:44.598890 kubelet[2567]: I0517 03:47:44.598852 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/059b292f-90ff-4d97-a182-547e119e7090-xtables-lock\") pod \"kube-proxy-vdvtj\" (UID: \"059b292f-90ff-4d97-a182-547e119e7090\") " pod="kube-system/kube-proxy-vdvtj" May 17 03:47:44.599275 kubelet[2567]: I0517 03:47:44.599251 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfs8f\" (UniqueName: \"kubernetes.io/projected/059b292f-90ff-4d97-a182-547e119e7090-kube-api-access-bfs8f\") pod \"kube-proxy-vdvtj\" (UID: \"059b292f-90ff-4d97-a182-547e119e7090\") " pod="kube-system/kube-proxy-vdvtj" May 17 03:47:44.599512 kubelet[2567]: I0517 03:47:44.599470 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/059b292f-90ff-4d97-a182-547e119e7090-kube-proxy\") pod \"kube-proxy-vdvtj\" (UID: \"059b292f-90ff-4d97-a182-547e119e7090\") " pod="kube-system/kube-proxy-vdvtj" May 17 03:47:44.599572 kubelet[2567]: I0517 03:47:44.599525 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/059b292f-90ff-4d97-a182-547e119e7090-lib-modules\") pod \"kube-proxy-vdvtj\" (UID: \"059b292f-90ff-4d97-a182-547e119e7090\") " pod="kube-system/kube-proxy-vdvtj" May 17 03:47:44.632053 systemd[1]: Created slice 
kubepods-besteffort-pod6f9729c7_c4b9_490d_b73c_ac7863fbcf4b.slice - libcontainer container kubepods-besteffort-pod6f9729c7_c4b9_490d_b73c_ac7863fbcf4b.slice. May 17 03:47:44.701523 kubelet[2567]: I0517 03:47:44.700480 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6f9729c7-c4b9-490d-b73c-ac7863fbcf4b-var-lib-calico\") pod \"tigera-operator-844669ff44-dpdt8\" (UID: \"6f9729c7-c4b9-490d-b73c-ac7863fbcf4b\") " pod="tigera-operator/tigera-operator-844669ff44-dpdt8" May 17 03:47:44.701523 kubelet[2567]: I0517 03:47:44.700570 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd6zp\" (UniqueName: \"kubernetes.io/projected/6f9729c7-c4b9-490d-b73c-ac7863fbcf4b-kube-api-access-qd6zp\") pod \"tigera-operator-844669ff44-dpdt8\" (UID: \"6f9729c7-c4b9-490d-b73c-ac7863fbcf4b\") " pod="tigera-operator/tigera-operator-844669ff44-dpdt8" May 17 03:47:44.836086 containerd[1462]: time="2025-05-17T03:47:44.831892675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdvtj,Uid:059b292f-90ff-4d97-a182-547e119e7090,Namespace:kube-system,Attempt:0,}" May 17 03:47:44.912342 containerd[1462]: time="2025-05-17T03:47:44.911712033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:47:44.912342 containerd[1462]: time="2025-05-17T03:47:44.911860806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:47:44.912342 containerd[1462]: time="2025-05-17T03:47:44.911907042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:44.913078 containerd[1462]: time="2025-05-17T03:47:44.912140136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:44.937144 containerd[1462]: time="2025-05-17T03:47:44.937063981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-dpdt8,Uid:6f9729c7-c4b9-490d-b73c-ac7863fbcf4b,Namespace:tigera-operator,Attempt:0,}" May 17 03:47:44.965398 systemd[1]: Started cri-containerd-688a132afd4a037d3eca4b1f206a7dd6fc10074970c22f8cd7daf09431f52637.scope - libcontainer container 688a132afd4a037d3eca4b1f206a7dd6fc10074970c22f8cd7daf09431f52637. May 17 03:47:44.999529 containerd[1462]: time="2025-05-17T03:47:44.999253534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:47:44.999529 containerd[1462]: time="2025-05-17T03:47:44.999337974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:47:44.999529 containerd[1462]: time="2025-05-17T03:47:44.999375426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:44.999529 containerd[1462]: time="2025-05-17T03:47:44.999464448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:45.008587 containerd[1462]: time="2025-05-17T03:47:45.008531596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdvtj,Uid:059b292f-90ff-4d97-a182-547e119e7090,Namespace:kube-system,Attempt:0,} returns sandbox id \"688a132afd4a037d3eca4b1f206a7dd6fc10074970c22f8cd7daf09431f52637\"" May 17 03:47:45.025729 containerd[1462]: time="2025-05-17T03:47:45.025490329Z" level=info msg="CreateContainer within sandbox \"688a132afd4a037d3eca4b1f206a7dd6fc10074970c22f8cd7daf09431f52637\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 03:47:45.026570 systemd[1]: Started cri-containerd-0b78ff8219e9692f6e5f3d92926bb73f727adbf7fe4cd56aa433994d52e17378.scope - libcontainer container 0b78ff8219e9692f6e5f3d92926bb73f727adbf7fe4cd56aa433994d52e17378. May 17 03:47:45.060772 containerd[1462]: time="2025-05-17T03:47:45.060671460Z" level=info msg="CreateContainer within sandbox \"688a132afd4a037d3eca4b1f206a7dd6fc10074970c22f8cd7daf09431f52637\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c794e7356085dc44ea57b4b49c3e4516df2983d65aad9cbd1af5cd6c2d56d036\"" May 17 03:47:45.065456 containerd[1462]: time="2025-05-17T03:47:45.062264285Z" level=info msg="StartContainer for \"c794e7356085dc44ea57b4b49c3e4516df2983d65aad9cbd1af5cd6c2d56d036\"" May 17 03:47:45.084326 containerd[1462]: time="2025-05-17T03:47:45.084285240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-dpdt8,Uid:6f9729c7-c4b9-490d-b73c-ac7863fbcf4b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b78ff8219e9692f6e5f3d92926bb73f727adbf7fe4cd56aa433994d52e17378\"" May 17 03:47:45.087552 containerd[1462]: time="2025-05-17T03:47:45.087462417Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 03:47:45.103367 systemd[1]: Started cri-containerd-c794e7356085dc44ea57b4b49c3e4516df2983d65aad9cbd1af5cd6c2d56d036.scope - libcontainer 
container c794e7356085dc44ea57b4b49c3e4516df2983d65aad9cbd1af5cd6c2d56d036. May 17 03:47:45.138366 containerd[1462]: time="2025-05-17T03:47:45.137909468Z" level=info msg="StartContainer for \"c794e7356085dc44ea57b4b49c3e4516df2983d65aad9cbd1af5cd6c2d56d036\" returns successfully" May 17 03:47:46.557152 kubelet[2567]: I0517 03:47:46.557020 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vdvtj" podStartSLOduration=2.556966251 podStartE2EDuration="2.556966251s" podCreationTimestamp="2025-05-17 03:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 03:47:45.331173722 +0000 UTC m=+5.296759026" watchObservedRunningTime="2025-05-17 03:47:46.556966251 +0000 UTC m=+6.522551585" May 17 03:47:47.458090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451901003.mount: Deactivated successfully. May 17 03:47:48.178391 containerd[1462]: time="2025-05-17T03:47:48.178314486Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:47:48.179971 containerd[1462]: time="2025-05-17T03:47:48.179888805Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 03:47:48.181068 containerd[1462]: time="2025-05-17T03:47:48.181007442Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:47:48.184940 containerd[1462]: time="2025-05-17T03:47:48.184912151Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:47:48.186115 containerd[1462]: time="2025-05-17T03:47:48.185608218Z" level=info msg="Pulled image 
\"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 3.097481087s" May 17 03:47:48.186115 containerd[1462]: time="2025-05-17T03:47:48.185743663Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 03:47:48.197845 containerd[1462]: time="2025-05-17T03:47:48.197729585Z" level=info msg="CreateContainer within sandbox \"0b78ff8219e9692f6e5f3d92926bb73f727adbf7fe4cd56aa433994d52e17378\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 03:47:48.223045 containerd[1462]: time="2025-05-17T03:47:48.222536269Z" level=info msg="CreateContainer within sandbox \"0b78ff8219e9692f6e5f3d92926bb73f727adbf7fe4cd56aa433994d52e17378\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"23cbed7df014c33b7a1ff4898906fa83872515201083599087b32922aad29584\"" May 17 03:47:48.223664 containerd[1462]: time="2025-05-17T03:47:48.223362980Z" level=info msg="StartContainer for \"23cbed7df014c33b7a1ff4898906fa83872515201083599087b32922aad29584\"" May 17 03:47:48.253375 systemd[1]: Started cri-containerd-23cbed7df014c33b7a1ff4898906fa83872515201083599087b32922aad29584.scope - libcontainer container 23cbed7df014c33b7a1ff4898906fa83872515201083599087b32922aad29584. 
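The "Pulled image" entry above reports both the image size ("25051446" bytes) and the wall-clock pull time ("3.097481087s"), which is enough to estimate effective registry throughput. A minimal sketch using those two values from the log (the numbers are copied from the entry, nothing else is assumed):

```python
# Estimate pull throughput for quay.io/tigera/operator:v1.38.0 from the
# containerd "Pulled image" log entry above.
size_bytes = 25_051_446      # size "25051446" reported by containerd
duration_s = 3.097481087     # "in 3.097481087s"

throughput_mib_s = size_bytes / duration_s / (1024 * 1024)
print(f"{throughput_mib_s:.1f} MiB/s")  # ~7.7 MiB/s
```

Roughly 7.7 MiB/s, i.e. the ~25 MB operator image accounts for essentially all of the three-second pull window seen later in the podStartE2EDuration accounting.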
May 17 03:47:48.282758 containerd[1462]: time="2025-05-17T03:47:48.282686619Z" level=info msg="StartContainer for \"23cbed7df014c33b7a1ff4898906fa83872515201083599087b32922aad29584\" returns successfully" May 17 03:47:55.590463 sudo[1711]: pam_unix(sudo:session): session closed for user root May 17 03:47:55.820411 sshd[1708]: pam_unix(sshd:session): session closed for user core May 17 03:47:55.825578 systemd[1]: sshd@8-172.24.4.46:22-172.24.4.1:53028.service: Deactivated successfully. May 17 03:47:55.828737 systemd[1]: session-11.scope: Deactivated successfully. May 17 03:47:55.829432 systemd[1]: session-11.scope: Consumed 7.983s CPU time, 162.6M memory peak, 0B memory swap peak. May 17 03:47:55.832560 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. May 17 03:47:55.835247 systemd-logind[1443]: Removed session 11. May 17 03:47:59.184531 kubelet[2567]: I0517 03:47:59.183812 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-dpdt8" podStartSLOduration=12.083449239 podStartE2EDuration="15.183794742s" podCreationTimestamp="2025-05-17 03:47:44 +0000 UTC" firstStartedPulling="2025-05-17 03:47:45.086814707 +0000 UTC m=+5.052399981" lastFinishedPulling="2025-05-17 03:47:48.18716021 +0000 UTC m=+8.152745484" observedRunningTime="2025-05-17 03:47:48.335639685 +0000 UTC m=+8.301224969" watchObservedRunningTime="2025-05-17 03:47:59.183794742 +0000 UTC m=+19.149380016" May 17 03:47:59.201831 systemd[1]: Created slice kubepods-besteffort-pod5fb291d8_9a4e_4029_8f6c_4eb1bc3d510c.slice - libcontainer container kubepods-besteffort-pod5fb291d8_9a4e_4029_8f6c_4eb1bc3d510c.slice. 
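The pod_startup_latency_tracker entry above for tigera-operator-844669ff44-dpdt8 is the first one with non-zero pulling timestamps, and the gap between podStartSLOduration (12.08s) and podStartE2EDuration (15.18s) is exactly the image-pull window. A small sketch that re-derives that gap from the firstStartedPulling / lastFinishedPulling values in the entry (the "+0000 UTC m=+…" suffix is dropped and the nanosecond fraction is trimmed to microseconds, since `datetime` only keeps six fractional digits):

```python
from datetime import datetime

# Timestamps copied from the kubelet entry above, minus the "+0000 UTC m=+..."
# suffix; [:26] trims nanoseconds to the microseconds strptime's %f accepts.
first_started_pulling = "2025-05-17 03:47:45.086814707"
last_finished_pulling = "2025-05-17 03:47:48.18716021"
fmt = "%Y-%m-%d %H:%M:%S.%f"

pull = (datetime.strptime(last_finished_pulling[:26], fmt)
        - datetime.strptime(first_started_pulling[:26], fmt))
print(round(pull.total_seconds(), 3))  # ~3.1 s
```

That ~3.1 s matches the 15.183794742s − 12.083449239s difference kubelet reports, confirming the SLO duration excludes image-pull time.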
May 17 03:47:59.304736 kubelet[2567]: I0517 03:47:59.304671 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zvk5\" (UniqueName: \"kubernetes.io/projected/5fb291d8-9a4e-4029-8f6c-4eb1bc3d510c-kube-api-access-6zvk5\") pod \"calico-typha-5b8b5bbb65-xxh4s\" (UID: \"5fb291d8-9a4e-4029-8f6c-4eb1bc3d510c\") " pod="calico-system/calico-typha-5b8b5bbb65-xxh4s" May 17 03:47:59.304903 kubelet[2567]: I0517 03:47:59.304876 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb291d8-9a4e-4029-8f6c-4eb1bc3d510c-tigera-ca-bundle\") pod \"calico-typha-5b8b5bbb65-xxh4s\" (UID: \"5fb291d8-9a4e-4029-8f6c-4eb1bc3d510c\") " pod="calico-system/calico-typha-5b8b5bbb65-xxh4s" May 17 03:47:59.304992 kubelet[2567]: I0517 03:47:59.304907 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5fb291d8-9a4e-4029-8f6c-4eb1bc3d510c-typha-certs\") pod \"calico-typha-5b8b5bbb65-xxh4s\" (UID: \"5fb291d8-9a4e-4029-8f6c-4eb1bc3d510c\") " pod="calico-system/calico-typha-5b8b5bbb65-xxh4s" May 17 03:47:59.510088 containerd[1462]: time="2025-05-17T03:47:59.509927514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b8b5bbb65-xxh4s,Uid:5fb291d8-9a4e-4029-8f6c-4eb1bc3d510c,Namespace:calico-system,Attempt:0,}" May 17 03:47:59.582835 containerd[1462]: time="2025-05-17T03:47:59.575758502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:47:59.582835 containerd[1462]: time="2025-05-17T03:47:59.575951831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:47:59.582835 containerd[1462]: time="2025-05-17T03:47:59.575996933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:59.582835 containerd[1462]: time="2025-05-17T03:47:59.576164654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:47:59.621366 systemd[1]: Started cri-containerd-313e8a09a802429cd14cd8b87d374774125b748d053063db3fb08e33b88c4d9f.scope - libcontainer container 313e8a09a802429cd14cd8b87d374774125b748d053063db3fb08e33b88c4d9f. May 17 03:47:59.682504 systemd[1]: Created slice kubepods-besteffort-pod0ff37be3_efe0_4de1_a6b4_526b212c4d52.slice - libcontainer container kubepods-besteffort-pod0ff37be3_efe0_4de1_a6b4_526b212c4d52.slice. May 17 03:47:59.699781 containerd[1462]: time="2025-05-17T03:47:59.699718329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b8b5bbb65-xxh4s,Uid:5fb291d8-9a4e-4029-8f6c-4eb1bc3d510c,Namespace:calico-system,Attempt:0,} returns sandbox id \"313e8a09a802429cd14cd8b87d374774125b748d053063db3fb08e33b88c4d9f\"" May 17 03:47:59.701722 containerd[1462]: time="2025-05-17T03:47:59.701662085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 03:47:59.808524 kubelet[2567]: I0517 03:47:59.808264 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0ff37be3-efe0-4de1-a6b4-526b212c4d52-var-run-calico\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.808524 kubelet[2567]: I0517 03:47:59.808316 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/0ff37be3-efe0-4de1-a6b4-526b212c4d52-policysync\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.808524 kubelet[2567]: I0517 03:47:59.808338 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ff37be3-efe0-4de1-a6b4-526b212c4d52-tigera-ca-bundle\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.808524 kubelet[2567]: I0517 03:47:59.808381 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ff37be3-efe0-4de1-a6b4-526b212c4d52-lib-modules\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.808524 kubelet[2567]: I0517 03:47:59.808402 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ff37be3-efe0-4de1-a6b4-526b212c4d52-xtables-lock\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.810872 kubelet[2567]: I0517 03:47:59.810629 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tbzh\" (UniqueName: \"kubernetes.io/projected/0ff37be3-efe0-4de1-a6b4-526b212c4d52-kube-api-access-8tbzh\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.810872 kubelet[2567]: I0517 03:47:59.810665 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/0ff37be3-efe0-4de1-a6b4-526b212c4d52-cni-bin-dir\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.810872 kubelet[2567]: I0517 03:47:59.810684 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0ff37be3-efe0-4de1-a6b4-526b212c4d52-cni-log-dir\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.810872 kubelet[2567]: I0517 03:47:59.810704 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0ff37be3-efe0-4de1-a6b4-526b212c4d52-cni-net-dir\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.810872 kubelet[2567]: I0517 03:47:59.810724 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0ff37be3-efe0-4de1-a6b4-526b212c4d52-node-certs\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.811070 kubelet[2567]: I0517 03:47:59.810743 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0ff37be3-efe0-4de1-a6b4-526b212c4d52-flexvol-driver-host\") pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.811070 kubelet[2567]: I0517 03:47:59.810760 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ff37be3-efe0-4de1-a6b4-526b212c4d52-var-lib-calico\") 
pod \"calico-node-qts46\" (UID: \"0ff37be3-efe0-4de1-a6b4-526b212c4d52\") " pod="calico-system/calico-node-qts46" May 17 03:47:59.914948 kubelet[2567]: E0517 03:47:59.914890 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:47:59.914948 kubelet[2567]: W0517 03:47:59.914913 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:47:59.914948 kubelet[2567]: E0517 03:47:59.914954 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:47:59.930568 kubelet[2567]: E0517 03:47:59.930541 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:47:59.930568 kubelet[2567]: W0517 03:47:59.930562 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:47:59.930568 kubelet[2567]: E0517 03:47:59.930580 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:47:59.986454 containerd[1462]: time="2025-05-17T03:47:59.985790857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qts46,Uid:0ff37be3-efe0-4de1-a6b4-526b212c4d52,Namespace:calico-system,Attempt:0,}" May 17 03:48:00.075698 kubelet[2567]: E0517 03:48:00.074894 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:00.101279 containerd[1462]: time="2025-05-17T03:48:00.100854584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:48:00.102376 containerd[1462]: time="2025-05-17T03:48:00.101400384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:48:00.104110 containerd[1462]: time="2025-05-17T03:48:00.102400359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:00.104110 containerd[1462]: time="2025-05-17T03:48:00.102525742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:00.136486 systemd[1]: Started cri-containerd-fa61e0ebc5ead2284243cbceba5f0d1b51f3d63709f0039ff6a71561dc0edefe.scope - libcontainer container fa61e0ebc5ead2284243cbceba5f0d1b51f3d63709f0039ff6a71561dc0edefe. 
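The repeated FlexVolume triplets above all describe one failure mode: kubelet tries to exec the nodeagent~uds driver binary, the binary does not exist ("executable file not found in $PATH"), so the call yields empty output, and unmarshaling "" as JSON then fails with "unexpected end of JSON input". A minimal Python sketch of that chain (kubelet itself is Go; `driver_call` here is a hypothetical stand-in for its driver-call path, not a real kubelet API):

```python
import json
import subprocess

def driver_call(executable, args):
    """Invoke a FlexVolume-style driver and decode its JSON reply."""
    try:
        out = subprocess.run([executable, *args],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        # Mirrors "executable file not found in $PATH": no output at all.
        out = ""
    try:
        return json.loads(out)
    except json.JSONDecodeError as e:
        # Empty output -> decode error, the analogue of Go's
        # "unexpected end of JSON input".
        return {"status": "Failure", "message": str(e)}

result = driver_call("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
                     "nodeagent~uds/uds", ["init"])
print(result["status"])  # Failure
```

On this node the errors are cosmetic: the nodeagent~uds driver is optional, kubelet merely skips creating the plugin, and calico-node comes up regardless, as the sandbox start entries that follow show.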
May 17 03:48:00.171452 kubelet[2567]: E0517 03:48:00.171419 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.171452 kubelet[2567]: W0517 03:48:00.171442 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.171818 kubelet[2567]: E0517 03:48:00.171466 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.171818 kubelet[2567]: E0517 03:48:00.171621 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.171818 kubelet[2567]: W0517 03:48:00.171629 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.171818 kubelet[2567]: E0517 03:48:00.171639 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.172273 kubelet[2567]: E0517 03:48:00.172244 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.172273 kubelet[2567]: W0517 03:48:00.172257 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.172273 kubelet[2567]: E0517 03:48:00.172267 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.172543 kubelet[2567]: E0517 03:48:00.172453 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.172543 kubelet[2567]: W0517 03:48:00.172462 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.172543 kubelet[2567]: E0517 03:48:00.172470 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.173008 kubelet[2567]: E0517 03:48:00.172621 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.173008 kubelet[2567]: W0517 03:48:00.172630 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.173008 kubelet[2567]: E0517 03:48:00.172638 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.173008 kubelet[2567]: E0517 03:48:00.172765 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.173008 kubelet[2567]: W0517 03:48:00.172789 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.173008 kubelet[2567]: E0517 03:48:00.172798 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.173541 kubelet[2567]: E0517 03:48:00.173501 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.173541 kubelet[2567]: W0517 03:48:00.173514 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.173541 kubelet[2567]: E0517 03:48:00.173524 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.173824 kubelet[2567]: E0517 03:48:00.173666 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.173824 kubelet[2567]: W0517 03:48:00.173675 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.173824 kubelet[2567]: E0517 03:48:00.173683 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.174307 kubelet[2567]: E0517 03:48:00.173869 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.174307 kubelet[2567]: W0517 03:48:00.173878 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.174307 kubelet[2567]: E0517 03:48:00.173886 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.174307 kubelet[2567]: E0517 03:48:00.174044 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.174307 kubelet[2567]: W0517 03:48:00.174053 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.174307 kubelet[2567]: E0517 03:48:00.174072 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.174307 kubelet[2567]: E0517 03:48:00.174254 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.174307 kubelet[2567]: W0517 03:48:00.174264 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.174307 kubelet[2567]: E0517 03:48:00.174274 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.174810 kubelet[2567]: E0517 03:48:00.174435 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.174810 kubelet[2567]: W0517 03:48:00.174447 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.174810 kubelet[2567]: E0517 03:48:00.174456 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.174810 kubelet[2567]: E0517 03:48:00.174605 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.174810 kubelet[2567]: W0517 03:48:00.174613 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.174810 kubelet[2567]: E0517 03:48:00.174622 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.174810 kubelet[2567]: E0517 03:48:00.174746 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.174810 kubelet[2567]: W0517 03:48:00.174757 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.174810 kubelet[2567]: E0517 03:48:00.174765 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.175124 kubelet[2567]: E0517 03:48:00.174895 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.175124 kubelet[2567]: W0517 03:48:00.174904 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.175124 kubelet[2567]: E0517 03:48:00.174913 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.175124 kubelet[2567]: E0517 03:48:00.175065 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.175124 kubelet[2567]: W0517 03:48:00.175073 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.175124 kubelet[2567]: E0517 03:48:00.175082 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.175289 kubelet[2567]: E0517 03:48:00.175262 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.175289 kubelet[2567]: W0517 03:48:00.175271 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.175289 kubelet[2567]: E0517 03:48:00.175281 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.176334 kubelet[2567]: E0517 03:48:00.175406 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.176334 kubelet[2567]: W0517 03:48:00.175420 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.176334 kubelet[2567]: E0517 03:48:00.175429 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.176334 kubelet[2567]: E0517 03:48:00.175558 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.176334 kubelet[2567]: W0517 03:48:00.175566 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.176334 kubelet[2567]: E0517 03:48:00.175574 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.176334 kubelet[2567]: E0517 03:48:00.175751 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.176334 kubelet[2567]: W0517 03:48:00.175786 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.176334 kubelet[2567]: E0517 03:48:00.175797 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.199083 containerd[1462]: time="2025-05-17T03:48:00.199024384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qts46,Uid:0ff37be3-efe0-4de1-a6b4-526b212c4d52,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa61e0ebc5ead2284243cbceba5f0d1b51f3d63709f0039ff6a71561dc0edefe\"" May 17 03:48:00.211943 kubelet[2567]: E0517 03:48:00.211906 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.211943 kubelet[2567]: W0517 03:48:00.211928 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.211943 kubelet[2567]: E0517 03:48:00.211951 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.212843 kubelet[2567]: I0517 03:48:00.211980 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134-registration-dir\") pod \"csi-node-driver-kw9vx\" (UID: \"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134\") " pod="calico-system/csi-node-driver-kw9vx" May 17 03:48:00.212843 kubelet[2567]: E0517 03:48:00.212292 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.212843 kubelet[2567]: W0517 03:48:00.212340 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.212843 kubelet[2567]: E0517 03:48:00.212351 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.212843 kubelet[2567]: I0517 03:48:00.212379 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdpw\" (UniqueName: \"kubernetes.io/projected/ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134-kube-api-access-tvdpw\") pod \"csi-node-driver-kw9vx\" (UID: \"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134\") " pod="calico-system/csi-node-driver-kw9vx" May 17 03:48:00.212843 kubelet[2567]: E0517 03:48:00.212605 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.212843 kubelet[2567]: W0517 03:48:00.212633 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.212843 kubelet[2567]: E0517 03:48:00.212679 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.213424 kubelet[2567]: E0517 03:48:00.213162 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.213424 kubelet[2567]: W0517 03:48:00.213172 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.213424 kubelet[2567]: E0517 03:48:00.213183 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.213526 kubelet[2567]: E0517 03:48:00.213430 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.213526 kubelet[2567]: W0517 03:48:00.213442 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.213526 kubelet[2567]: E0517 03:48:00.213451 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.213526 kubelet[2567]: I0517 03:48:00.213487 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134-socket-dir\") pod \"csi-node-driver-kw9vx\" (UID: \"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134\") " pod="calico-system/csi-node-driver-kw9vx" May 17 03:48:00.213764 kubelet[2567]: E0517 03:48:00.213748 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.213764 kubelet[2567]: W0517 03:48:00.213763 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.213829 kubelet[2567]: E0517 03:48:00.213773 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.213855 kubelet[2567]: I0517 03:48:00.213826 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134-kubelet-dir\") pod \"csi-node-driver-kw9vx\" (UID: \"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134\") " pod="calico-system/csi-node-driver-kw9vx" May 17 03:48:00.214014 kubelet[2567]: E0517 03:48:00.214001 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.214053 kubelet[2567]: W0517 03:48:00.214031 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.214053 kubelet[2567]: E0517 03:48:00.214042 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.214250 kubelet[2567]: E0517 03:48:00.214236 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.214250 kubelet[2567]: W0517 03:48:00.214248 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.214324 kubelet[2567]: E0517 03:48:00.214256 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.214464 kubelet[2567]: E0517 03:48:00.214451 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.214464 kubelet[2567]: W0517 03:48:00.214463 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.214522 kubelet[2567]: E0517 03:48:00.214472 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.214522 kubelet[2567]: I0517 03:48:00.214505 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134-varrun\") pod \"csi-node-driver-kw9vx\" (UID: \"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134\") " pod="calico-system/csi-node-driver-kw9vx" May 17 03:48:00.214691 kubelet[2567]: E0517 03:48:00.214677 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.214691 kubelet[2567]: W0517 03:48:00.214689 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.214756 kubelet[2567]: E0517 03:48:00.214699 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.214927 kubelet[2567]: E0517 03:48:00.214870 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.214927 kubelet[2567]: W0517 03:48:00.214879 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.214927 kubelet[2567]: E0517 03:48:00.214892 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.215149 kubelet[2567]: E0517 03:48:00.215133 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.215149 kubelet[2567]: W0517 03:48:00.215146 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.215244 kubelet[2567]: E0517 03:48:00.215156 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.215439 kubelet[2567]: E0517 03:48:00.215423 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.215439 kubelet[2567]: W0517 03:48:00.215437 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.215931 kubelet[2567]: E0517 03:48:00.215448 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.216169 kubelet[2567]: E0517 03:48:00.216143 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.216169 kubelet[2567]: W0517 03:48:00.216168 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.216291 kubelet[2567]: E0517 03:48:00.216179 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.216437 kubelet[2567]: E0517 03:48:00.216423 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.216437 kubelet[2567]: W0517 03:48:00.216435 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.216516 kubelet[2567]: E0517 03:48:00.216467 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.316285 kubelet[2567]: E0517 03:48:00.316104 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.316285 kubelet[2567]: W0517 03:48:00.316243 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.316285 kubelet[2567]: E0517 03:48:00.316276 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.317012 kubelet[2567]: E0517 03:48:00.316982 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.317012 kubelet[2567]: W0517 03:48:00.317009 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.317188 kubelet[2567]: E0517 03:48:00.317067 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.317620 kubelet[2567]: E0517 03:48:00.317588 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.317720 kubelet[2567]: W0517 03:48:00.317639 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.317720 kubelet[2567]: E0517 03:48:00.317662 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.318091 kubelet[2567]: E0517 03:48:00.318062 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.318192 kubelet[2567]: W0517 03:48:00.318110 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.318192 kubelet[2567]: E0517 03:48:00.318133 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.318644 kubelet[2567]: E0517 03:48:00.318630 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.318730 kubelet[2567]: W0517 03:48:00.318650 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.318730 kubelet[2567]: E0517 03:48:00.318673 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.319308 kubelet[2567]: E0517 03:48:00.319276 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.319308 kubelet[2567]: W0517 03:48:00.319306 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.319493 kubelet[2567]: E0517 03:48:00.319328 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.319768 kubelet[2567]: E0517 03:48:00.319739 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.319768 kubelet[2567]: W0517 03:48:00.319765 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.319917 kubelet[2567]: E0517 03:48:00.319785 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.320327 kubelet[2567]: E0517 03:48:00.320297 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.320327 kubelet[2567]: W0517 03:48:00.320323 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.320515 kubelet[2567]: E0517 03:48:00.320344 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.320765 kubelet[2567]: E0517 03:48:00.320736 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.320765 kubelet[2567]: W0517 03:48:00.320763 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.320923 kubelet[2567]: E0517 03:48:00.320814 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.321398 kubelet[2567]: E0517 03:48:00.321366 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.321398 kubelet[2567]: W0517 03:48:00.321395 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.321572 kubelet[2567]: E0517 03:48:00.321417 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.321780 kubelet[2567]: E0517 03:48:00.321712 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.321780 kubelet[2567]: W0517 03:48:00.321742 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.321780 kubelet[2567]: E0517 03:48:00.321761 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.322147 kubelet[2567]: E0517 03:48:00.322121 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.322147 kubelet[2567]: W0517 03:48:00.322145 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.322344 kubelet[2567]: E0517 03:48:00.322164 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.322608 kubelet[2567]: E0517 03:48:00.322580 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.322608 kubelet[2567]: W0517 03:48:00.322605 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.322763 kubelet[2567]: E0517 03:48:00.322625 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.323016 kubelet[2567]: E0517 03:48:00.322989 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.323016 kubelet[2567]: W0517 03:48:00.323013 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.323185 kubelet[2567]: E0517 03:48:00.323067 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.323585 kubelet[2567]: E0517 03:48:00.323556 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.323585 kubelet[2567]: W0517 03:48:00.323583 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.323754 kubelet[2567]: E0517 03:48:00.323604 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.324685 kubelet[2567]: E0517 03:48:00.324410 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.324685 kubelet[2567]: W0517 03:48:00.324439 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.324685 kubelet[2567]: E0517 03:48:00.324462 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.325544 kubelet[2567]: E0517 03:48:00.325094 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.325544 kubelet[2567]: W0517 03:48:00.325115 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.325544 kubelet[2567]: E0517 03:48:00.325136 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.328232 kubelet[2567]: E0517 03:48:00.327843 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.328232 kubelet[2567]: W0517 03:48:00.327885 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.328232 kubelet[2567]: E0517 03:48:00.327924 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.329896 kubelet[2567]: E0517 03:48:00.328748 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.329896 kubelet[2567]: W0517 03:48:00.328776 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.329896 kubelet[2567]: E0517 03:48:00.328800 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.331311 kubelet[2567]: E0517 03:48:00.330699 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.331311 kubelet[2567]: W0517 03:48:00.330729 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.331311 kubelet[2567]: E0517 03:48:00.330753 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.335404 kubelet[2567]: E0517 03:48:00.334461 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.335404 kubelet[2567]: W0517 03:48:00.334535 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.335404 kubelet[2567]: E0517 03:48:00.334562 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.339432 kubelet[2567]: E0517 03:48:00.339022 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.339432 kubelet[2567]: W0517 03:48:00.339054 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.339432 kubelet[2567]: E0517 03:48:00.339105 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.344034 kubelet[2567]: E0517 03:48:00.343947 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.345531 kubelet[2567]: W0517 03:48:00.344291 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.345531 kubelet[2567]: E0517 03:48:00.344332 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.348321 kubelet[2567]: E0517 03:48:00.348289 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.350026 kubelet[2567]: W0517 03:48:00.349017 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.350026 kubelet[2567]: E0517 03:48:00.349176 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.352378 kubelet[2567]: E0517 03:48:00.352348 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.352537 kubelet[2567]: W0517 03:48:00.352511 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.352716 kubelet[2567]: E0517 03:48:00.352657 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:00.365163 kubelet[2567]: E0517 03:48:00.365030 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:00.365163 kubelet[2567]: W0517 03:48:00.365068 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:00.365163 kubelet[2567]: E0517 03:48:00.365100 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:00.450038 systemd[1]: run-containerd-runc-k8s.io-313e8a09a802429cd14cd8b87d374774125b748d053063db3fb08e33b88c4d9f-runc.tyOZ6S.mount: Deactivated successfully. May 17 03:48:02.097233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598119747.mount: Deactivated successfully. 
May 17 03:48:02.152152 kubelet[2567]: E0517 03:48:02.150808 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:03.188278 containerd[1462]: time="2025-05-17T03:48:03.188221794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:03.189765 containerd[1462]: time="2025-05-17T03:48:03.189612978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 03:48:03.192157 containerd[1462]: time="2025-05-17T03:48:03.190941714Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:03.194080 containerd[1462]: time="2025-05-17T03:48:03.193324940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:03.194080 containerd[1462]: time="2025-05-17T03:48:03.193975748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 3.492248015s" May 17 03:48:03.194080 containerd[1462]: time="2025-05-17T03:48:03.194011608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 03:48:03.195353 containerd[1462]: time="2025-05-17T03:48:03.195329439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 03:48:03.216685 containerd[1462]: time="2025-05-17T03:48:03.216640651Z" level=info msg="CreateContainer within sandbox \"313e8a09a802429cd14cd8b87d374774125b748d053063db3fb08e33b88c4d9f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 03:48:03.239158 containerd[1462]: time="2025-05-17T03:48:03.238990198Z" level=info msg="CreateContainer within sandbox \"313e8a09a802429cd14cd8b87d374774125b748d053063db3fb08e33b88c4d9f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a2e00d5494904fa84775fdd405027277bee72ed5fd7a50394c02461740257674\"" May 17 03:48:03.239901 containerd[1462]: time="2025-05-17T03:48:03.239876887Z" level=info msg="StartContainer for \"a2e00d5494904fa84775fdd405027277bee72ed5fd7a50394c02461740257674\"" May 17 03:48:03.289435 systemd[1]: Started cri-containerd-a2e00d5494904fa84775fdd405027277bee72ed5fd7a50394c02461740257674.scope - libcontainer container a2e00d5494904fa84775fdd405027277bee72ed5fd7a50394c02461740257674. 
May 17 03:48:03.350006 containerd[1462]: time="2025-05-17T03:48:03.349959902Z" level=info msg="StartContainer for \"a2e00d5494904fa84775fdd405027277bee72ed5fd7a50394c02461740257674\" returns successfully" May 17 03:48:03.399678 kubelet[2567]: I0517 03:48:03.399587 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b8b5bbb65-xxh4s" podStartSLOduration=0.90548666 podStartE2EDuration="4.399564123s" podCreationTimestamp="2025-05-17 03:47:59 +0000 UTC" firstStartedPulling="2025-05-17 03:47:59.700971587 +0000 UTC m=+19.666556861" lastFinishedPulling="2025-05-17 03:48:03.19504905 +0000 UTC m=+23.160634324" observedRunningTime="2025-05-17 03:48:03.399173229 +0000 UTC m=+23.364758503" watchObservedRunningTime="2025-05-17 03:48:03.399564123 +0000 UTC m=+23.365149397" May 17 03:48:03.403616 kubelet[2567]: E0517 03:48:03.403243 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.403616 kubelet[2567]: W0517 03:48:03.403333 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.403616 kubelet[2567]: E0517 03:48:03.403354 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.403743 kubelet[2567]: E0517 03:48:03.403660 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.403743 kubelet[2567]: W0517 03:48:03.403670 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.403743 kubelet[2567]: E0517 03:48:03.403680 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.403958 kubelet[2567]: E0517 03:48:03.403939 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.404228 kubelet[2567]: W0517 03:48:03.404007 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.404228 kubelet[2567]: E0517 03:48:03.404025 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.404354 kubelet[2567]: E0517 03:48:03.404335 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.404354 kubelet[2567]: W0517 03:48:03.404349 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.404523 kubelet[2567]: E0517 03:48:03.404359 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.404557 kubelet[2567]: E0517 03:48:03.404542 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.404557 kubelet[2567]: W0517 03:48:03.404552 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.404605 kubelet[2567]: E0517 03:48:03.404560 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.404904 kubelet[2567]: E0517 03:48:03.404884 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.404904 kubelet[2567]: W0517 03:48:03.404898 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.405070 kubelet[2567]: E0517 03:48:03.404908 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.405335 kubelet[2567]: E0517 03:48:03.405313 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.405335 kubelet[2567]: W0517 03:48:03.405327 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.405405 kubelet[2567]: E0517 03:48:03.405336 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.405696 kubelet[2567]: E0517 03:48:03.405516 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.405696 kubelet[2567]: W0517 03:48:03.405529 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.405696 kubelet[2567]: E0517 03:48:03.405537 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.405789 kubelet[2567]: E0517 03:48:03.405708 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.405789 kubelet[2567]: W0517 03:48:03.405717 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.405789 kubelet[2567]: E0517 03:48:03.405727 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.406117 kubelet[2567]: E0517 03:48:03.405889 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.406117 kubelet[2567]: W0517 03:48:03.405901 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.406117 kubelet[2567]: E0517 03:48:03.405910 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.406117 kubelet[2567]: E0517 03:48:03.406078 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.406117 kubelet[2567]: W0517 03:48:03.406086 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.406117 kubelet[2567]: E0517 03:48:03.406095 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.407092 kubelet[2567]: E0517 03:48:03.406288 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.407092 kubelet[2567]: W0517 03:48:03.406315 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.407092 kubelet[2567]: E0517 03:48:03.406325 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.407092 kubelet[2567]: E0517 03:48:03.406521 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.407092 kubelet[2567]: W0517 03:48:03.406530 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.407092 kubelet[2567]: E0517 03:48:03.406538 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.407092 kubelet[2567]: E0517 03:48:03.406734 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.407092 kubelet[2567]: W0517 03:48:03.406743 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.407092 kubelet[2567]: E0517 03:48:03.406752 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.407092 kubelet[2567]: E0517 03:48:03.406918 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.407756 kubelet[2567]: W0517 03:48:03.406926 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.407756 kubelet[2567]: E0517 03:48:03.406935 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.449659 kubelet[2567]: E0517 03:48:03.449472 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.449659 kubelet[2567]: W0517 03:48:03.449499 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.449659 kubelet[2567]: E0517 03:48:03.449545 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.451783 kubelet[2567]: E0517 03:48:03.449997 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.451783 kubelet[2567]: W0517 03:48:03.450165 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.451783 kubelet[2567]: E0517 03:48:03.450179 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.451783 kubelet[2567]: E0517 03:48:03.451077 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.451783 kubelet[2567]: W0517 03:48:03.451093 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.451783 kubelet[2567]: E0517 03:48:03.451102 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.451783 kubelet[2567]: E0517 03:48:03.451585 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.451783 kubelet[2567]: W0517 03:48:03.451602 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.451783 kubelet[2567]: E0517 03:48:03.451634 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.452237 kubelet[2567]: E0517 03:48:03.452216 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.452237 kubelet[2567]: W0517 03:48:03.452233 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.452315 kubelet[2567]: E0517 03:48:03.452243 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.452460 kubelet[2567]: E0517 03:48:03.452436 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.452460 kubelet[2567]: W0517 03:48:03.452450 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.452460 kubelet[2567]: E0517 03:48:03.452459 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.453228 kubelet[2567]: E0517 03:48:03.452768 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.453228 kubelet[2567]: W0517 03:48:03.452782 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.453228 kubelet[2567]: E0517 03:48:03.452792 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.453326 kubelet[2567]: E0517 03:48:03.453260 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.453326 kubelet[2567]: W0517 03:48:03.453270 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.453326 kubelet[2567]: E0517 03:48:03.453280 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.453593 kubelet[2567]: E0517 03:48:03.453569 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.453593 kubelet[2567]: W0517 03:48:03.453583 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.453593 kubelet[2567]: E0517 03:48:03.453594 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.454363 kubelet[2567]: E0517 03:48:03.454339 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.454363 kubelet[2567]: W0517 03:48:03.454354 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.454363 kubelet[2567]: E0517 03:48:03.454364 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.454824 kubelet[2567]: E0517 03:48:03.454803 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.454824 kubelet[2567]: W0517 03:48:03.454819 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.454914 kubelet[2567]: E0517 03:48:03.454828 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.455079 kubelet[2567]: E0517 03:48:03.455056 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.455079 kubelet[2567]: W0517 03:48:03.455070 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.455145 kubelet[2567]: E0517 03:48:03.455081 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.455298 kubelet[2567]: E0517 03:48:03.455277 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.455298 kubelet[2567]: W0517 03:48:03.455292 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.455370 kubelet[2567]: E0517 03:48:03.455301 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.456247 kubelet[2567]: E0517 03:48:03.455478 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.456247 kubelet[2567]: W0517 03:48:03.455491 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.456247 kubelet[2567]: E0517 03:48:03.455500 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.456247 kubelet[2567]: E0517 03:48:03.455786 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.456247 kubelet[2567]: W0517 03:48:03.455796 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.456247 kubelet[2567]: E0517 03:48:03.455805 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.456557 kubelet[2567]: E0517 03:48:03.456533 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.456557 kubelet[2567]: W0517 03:48:03.456549 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.456621 kubelet[2567]: E0517 03:48:03.456561 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:03.456923 kubelet[2567]: E0517 03:48:03.456898 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.456923 kubelet[2567]: W0517 03:48:03.456913 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.456923 kubelet[2567]: E0517 03:48:03.456922 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 03:48:03.457126 kubelet[2567]: E0517 03:48:03.457104 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:03.457126 kubelet[2567]: W0517 03:48:03.457118 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:03.457126 kubelet[2567]: E0517 03:48:03.457127 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 03:48:04.150273 kubelet[2567]: E0517 03:48:04.148977 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:04.376735 kubelet[2567]: I0517 03:48:04.376680 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 03:48:04.412590 kubelet[2567]: E0517 03:48:04.412459 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 03:48:04.413443 kubelet[2567]: W0517 03:48:04.413315 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 03:48:04.413443 kubelet[2567]: E0517 03:48:04.413394 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:703 FlexVolume probe error triplet repeats verbatim (timestamps 03:48:04.414 through 03:48:04.472676) and is elided here ...]
May 17 03:48:05.286444 containerd[1462]: time="2025-05-17T03:48:05.286178002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:05.288163 containerd[1462]: time="2025-05-17T03:48:05.287883658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 03:48:05.289594 containerd[1462]: time="2025-05-17T03:48:05.289509970Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:05.292375 containerd[1462]: time="2025-05-17T03:48:05.292332354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:05.293592 containerd[1462]: time="2025-05-17T03:48:05.293053730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.097693173s" May 17 03:48:05.293592 containerd[1462]: time="2025-05-17T03:48:05.293101665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 03:48:05.300949 containerd[1462]: time="2025-05-17T03:48:05.300909639Z" level=info msg="CreateContainer within sandbox \"fa61e0ebc5ead2284243cbceba5f0d1b51f3d63709f0039ff6a71561dc0edefe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 03:48:05.323325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551562246.mount: Deactivated successfully. May 17 03:48:05.329526 containerd[1462]: time="2025-05-17T03:48:05.329385208Z" level=info msg="CreateContainer within sandbox \"fa61e0ebc5ead2284243cbceba5f0d1b51f3d63709f0039ff6a71561dc0edefe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fe265f14e460c9efcf03a16b732cff0a6f91026639076e5111b79df636fa2fb3\"" May 17 03:48:05.330359 containerd[1462]: time="2025-05-17T03:48:05.330315340Z" level=info msg="StartContainer for \"fe265f14e460c9efcf03a16b732cff0a6f91026639076e5111b79df636fa2fb3\"" May 17 03:48:05.375353 systemd[1]: Started cri-containerd-fe265f14e460c9efcf03a16b732cff0a6f91026639076e5111b79df636fa2fb3.scope - libcontainer container fe265f14e460c9efcf03a16b732cff0a6f91026639076e5111b79df636fa2fb3. May 17 03:48:05.416487 containerd[1462]: time="2025-05-17T03:48:05.416440980Z" level=info msg="StartContainer for \"fe265f14e460c9efcf03a16b732cff0a6f91026639076e5111b79df636fa2fb3\" returns successfully" May 17 03:48:05.423468 systemd[1]: cri-containerd-fe265f14e460c9efcf03a16b732cff0a6f91026639076e5111b79df636fa2fb3.scope: Deactivated successfully. 
May 17 03:48:05.449652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe265f14e460c9efcf03a16b732cff0a6f91026639076e5111b79df636fa2fb3-rootfs.mount: Deactivated successfully. May 17 03:48:06.150904 kubelet[2567]: E0517 03:48:06.149940 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:06.399978 containerd[1462]: time="2025-05-17T03:48:06.399863269Z" level=info msg="shim disconnected" id=fe265f14e460c9efcf03a16b732cff0a6f91026639076e5111b79df636fa2fb3 namespace=k8s.io May 17 03:48:06.401182 containerd[1462]: time="2025-05-17T03:48:06.400324892Z" level=warning msg="cleaning up after shim disconnected" id=fe265f14e460c9efcf03a16b732cff0a6f91026639076e5111b79df636fa2fb3 namespace=k8s.io May 17 03:48:06.401182 containerd[1462]: time="2025-05-17T03:48:06.400359677Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 03:48:07.203427 kubelet[2567]: I0517 03:48:07.202887 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 03:48:07.417120 containerd[1462]: time="2025-05-17T03:48:07.417036148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 03:48:08.150325 kubelet[2567]: E0517 03:48:08.149903 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:10.153774 kubelet[2567]: E0517 03:48:10.153706 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:12.151051 kubelet[2567]: E0517 03:48:12.150509 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:12.631850 containerd[1462]: time="2025-05-17T03:48:12.631787369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:12.633534 containerd[1462]: time="2025-05-17T03:48:12.633478149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 03:48:12.635002 containerd[1462]: time="2025-05-17T03:48:12.634952801Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:12.639318 containerd[1462]: time="2025-05-17T03:48:12.638459454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:12.639318 containerd[1462]: time="2025-05-17T03:48:12.639171833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 5.222034045s" May 17 03:48:12.639318 containerd[1462]: 
time="2025-05-17T03:48:12.639217650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 03:48:12.648737 containerd[1462]: time="2025-05-17T03:48:12.648704182Z" level=info msg="CreateContainer within sandbox \"fa61e0ebc5ead2284243cbceba5f0d1b51f3d63709f0039ff6a71561dc0edefe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 03:48:12.672901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327041194.mount: Deactivated successfully. May 17 03:48:12.683535 containerd[1462]: time="2025-05-17T03:48:12.683485656Z" level=info msg="CreateContainer within sandbox \"fa61e0ebc5ead2284243cbceba5f0d1b51f3d63709f0039ff6a71561dc0edefe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa\"" May 17 03:48:12.686237 containerd[1462]: time="2025-05-17T03:48:12.684504413Z" level=info msg="StartContainer for \"e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa\"" May 17 03:48:12.741357 systemd[1]: Started cri-containerd-e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa.scope - libcontainer container e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa. 
May 17 03:48:12.773643 containerd[1462]: time="2025-05-17T03:48:12.773530335Z" level=info msg="StartContainer for \"e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa\" returns successfully" May 17 03:48:14.152797 kubelet[2567]: E0517 03:48:14.151823 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:14.534900 containerd[1462]: time="2025-05-17T03:48:14.534792394Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 03:48:14.540591 systemd[1]: cri-containerd-e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa.scope: Deactivated successfully. May 17 03:48:14.541033 systemd[1]: cri-containerd-e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa.scope: Consumed 1.103s CPU time. May 17 03:48:14.576647 kubelet[2567]: I0517 03:48:14.575544 2567 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 03:48:14.593953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa-rootfs.mount: Deactivated successfully. 
May 17 03:48:15.300253 containerd[1462]: time="2025-05-17T03:48:15.299994711Z" level=info msg="shim disconnected" id=e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa namespace=k8s.io May 17 03:48:15.300253 containerd[1462]: time="2025-05-17T03:48:15.300098188Z" level=warning msg="cleaning up after shim disconnected" id=e1b7ba4bfc8a5eb0b887af29f9e7456917bf09a753c30a2dc5af4b79d47655fa namespace=k8s.io May 17 03:48:15.300253 containerd[1462]: time="2025-05-17T03:48:15.300120124Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 03:48:15.326617 systemd[1]: Created slice kubepods-burstable-pod97639112_d662_401e_9525_ef6c5cfa2196.slice - libcontainer container kubepods-burstable-pod97639112_d662_401e_9525_ef6c5cfa2196.slice. May 17 03:48:15.348965 kubelet[2567]: I0517 03:48:15.348925 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/100470f3-1018-4b21-81fe-cdd6b96f94f3-config-volume\") pod \"coredns-674b8bbfcf-kqtzv\" (UID: \"100470f3-1018-4b21-81fe-cdd6b96f94f3\") " pod="kube-system/coredns-674b8bbfcf-kqtzv" May 17 03:48:15.349246 systemd[1]: Created slice kubepods-burstable-pod100470f3_1018_4b21_81fe_cdd6b96f94f3.slice - libcontainer container kubepods-burstable-pod100470f3_1018_4b21_81fe_cdd6b96f94f3.slice. 
May 17 03:48:15.350026 kubelet[2567]: I0517 03:48:15.349188 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97639112-d662-401e-9525-ef6c5cfa2196-config-volume\") pod \"coredns-674b8bbfcf-wsfgb\" (UID: \"97639112-d662-401e-9525-ef6c5cfa2196\") " pod="kube-system/coredns-674b8bbfcf-wsfgb" May 17 03:48:15.350026 kubelet[2567]: I0517 03:48:15.349545 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg9bs\" (UniqueName: \"kubernetes.io/projected/97639112-d662-401e-9525-ef6c5cfa2196-kube-api-access-wg9bs\") pod \"coredns-674b8bbfcf-wsfgb\" (UID: \"97639112-d662-401e-9525-ef6c5cfa2196\") " pod="kube-system/coredns-674b8bbfcf-wsfgb" May 17 03:48:15.350026 kubelet[2567]: I0517 03:48:15.349951 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m72s\" (UniqueName: \"kubernetes.io/projected/100470f3-1018-4b21-81fe-cdd6b96f94f3-kube-api-access-4m72s\") pod \"coredns-674b8bbfcf-kqtzv\" (UID: \"100470f3-1018-4b21-81fe-cdd6b96f94f3\") " pod="kube-system/coredns-674b8bbfcf-kqtzv" May 17 03:48:15.368784 systemd[1]: Created slice kubepods-besteffort-podceca80df_b7ce_42e9_b2ed_1cd3aa7b6134.slice - libcontainer container kubepods-besteffort-podceca80df_b7ce_42e9_b2ed_1cd3aa7b6134.slice. May 17 03:48:15.374261 containerd[1462]: time="2025-05-17T03:48:15.374227737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw9vx,Uid:ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134,Namespace:calico-system,Attempt:0,}" May 17 03:48:15.380291 systemd[1]: Created slice kubepods-besteffort-podc7325861_b11f_4c03_8427_1ec9f970f69e.slice - libcontainer container kubepods-besteffort-podc7325861_b11f_4c03_8427_1ec9f970f69e.slice. 
May 17 03:48:15.391959 systemd[1]: Created slice kubepods-besteffort-pod9f1bd9b0_7eec_4bda_8bf7_c4484df07375.slice - libcontainer container kubepods-besteffort-pod9f1bd9b0_7eec_4bda_8bf7_c4484df07375.slice. May 17 03:48:15.418081 systemd[1]: Created slice kubepods-besteffort-pod5d31f0bb_0747_4e8f_868a_d7b2d8faa68d.slice - libcontainer container kubepods-besteffort-pod5d31f0bb_0747_4e8f_868a_d7b2d8faa68d.slice. May 17 03:48:15.437847 systemd[1]: Created slice kubepods-besteffort-pod01156fe6_8d16_47a0_a6f4_f7aee2dfcb6d.slice - libcontainer container kubepods-besteffort-pod01156fe6_8d16_47a0_a6f4_f7aee2dfcb6d.slice. May 17 03:48:15.447163 systemd[1]: Created slice kubepods-besteffort-pod08766a61_c1c3_45ec_a870_662027187849.slice - libcontainer container kubepods-besteffort-pod08766a61_c1c3_45ec_a870_662027187849.slice. May 17 03:48:15.451698 kubelet[2567]: I0517 03:48:15.450508 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d31f0bb-0747-4e8f-868a-d7b2d8faa68d-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-gbf9j\" (UID: \"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d\") " pod="calico-system/goldmane-78d55f7ddc-gbf9j" May 17 03:48:15.451698 kubelet[2567]: I0517 03:48:15.450584 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvtfb\" (UniqueName: \"kubernetes.io/projected/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-kube-api-access-lvtfb\") pod \"whisker-578fbf45d9-l5h8n\" (UID: \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\") " pod="calico-system/whisker-578fbf45d9-l5h8n" May 17 03:48:15.451698 kubelet[2567]: I0517 03:48:15.450744 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/08766a61-c1c3-45ec-a870-662027187849-calico-apiserver-certs\") pod \"calico-apiserver-67566d8f66-qmgwp\" (UID: 
\"08766a61-c1c3-45ec-a870-662027187849\") " pod="calico-apiserver/calico-apiserver-67566d8f66-qmgwp" May 17 03:48:15.451698 kubelet[2567]: I0517 03:48:15.450834 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfl6k\" (UniqueName: \"kubernetes.io/projected/c7325861-b11f-4c03-8427-1ec9f970f69e-kube-api-access-cfl6k\") pod \"calico-kube-controllers-bbf4dcdfc-vlsx4\" (UID: \"c7325861-b11f-4c03-8427-1ec9f970f69e\") " pod="calico-system/calico-kube-controllers-bbf4dcdfc-vlsx4" May 17 03:48:15.451698 kubelet[2567]: I0517 03:48:15.451093 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d31f0bb-0747-4e8f-868a-d7b2d8faa68d-config\") pod \"goldmane-78d55f7ddc-gbf9j\" (UID: \"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d\") " pod="calico-system/goldmane-78d55f7ddc-gbf9j" May 17 03:48:15.452048 kubelet[2567]: I0517 03:48:15.451123 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5d31f0bb-0747-4e8f-868a-d7b2d8faa68d-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-gbf9j\" (UID: \"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d\") " pod="calico-system/goldmane-78d55f7ddc-gbf9j" May 17 03:48:15.452048 kubelet[2567]: I0517 03:48:15.451175 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zscb\" (UniqueName: \"kubernetes.io/projected/08766a61-c1c3-45ec-a870-662027187849-kube-api-access-6zscb\") pod \"calico-apiserver-67566d8f66-qmgwp\" (UID: \"08766a61-c1c3-45ec-a870-662027187849\") " pod="calico-apiserver/calico-apiserver-67566d8f66-qmgwp" May 17 03:48:15.452048 kubelet[2567]: I0517 03:48:15.451232 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-whisker-backend-key-pair\") pod \"whisker-578fbf45d9-l5h8n\" (UID: \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\") " pod="calico-system/whisker-578fbf45d9-l5h8n" May 17 03:48:15.452048 kubelet[2567]: I0517 03:48:15.451257 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-whisker-ca-bundle\") pod \"whisker-578fbf45d9-l5h8n\" (UID: \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\") " pod="calico-system/whisker-578fbf45d9-l5h8n" May 17 03:48:15.452048 kubelet[2567]: I0517 03:48:15.451311 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdp4\" (UniqueName: \"kubernetes.io/projected/5d31f0bb-0747-4e8f-868a-d7b2d8faa68d-kube-api-access-ccdp4\") pod \"goldmane-78d55f7ddc-gbf9j\" (UID: \"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d\") " pod="calico-system/goldmane-78d55f7ddc-gbf9j" May 17 03:48:15.452179 kubelet[2567]: I0517 03:48:15.451350 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9f1bd9b0-7eec-4bda-8bf7-c4484df07375-calico-apiserver-certs\") pod \"calico-apiserver-67566d8f66-fbrbb\" (UID: \"9f1bd9b0-7eec-4bda-8bf7-c4484df07375\") " pod="calico-apiserver/calico-apiserver-67566d8f66-fbrbb" May 17 03:48:15.452179 kubelet[2567]: I0517 03:48:15.451576 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7325861-b11f-4c03-8427-1ec9f970f69e-tigera-ca-bundle\") pod \"calico-kube-controllers-bbf4dcdfc-vlsx4\" (UID: \"c7325861-b11f-4c03-8427-1ec9f970f69e\") " pod="calico-system/calico-kube-controllers-bbf4dcdfc-vlsx4" May 17 03:48:15.452179 kubelet[2567]: I0517 03:48:15.452040 2567 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhvgg\" (UniqueName: \"kubernetes.io/projected/9f1bd9b0-7eec-4bda-8bf7-c4484df07375-kube-api-access-dhvgg\") pod \"calico-apiserver-67566d8f66-fbrbb\" (UID: \"9f1bd9b0-7eec-4bda-8bf7-c4484df07375\") " pod="calico-apiserver/calico-apiserver-67566d8f66-fbrbb" May 17 03:48:15.462231 containerd[1462]: time="2025-05-17T03:48:15.461619539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 03:48:15.513813 containerd[1462]: time="2025-05-17T03:48:15.513750422Z" level=error msg="Failed to destroy network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.515214 containerd[1462]: time="2025-05-17T03:48:15.514633232Z" level=error msg="encountered an error cleaning up failed sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.515214 containerd[1462]: time="2025-05-17T03:48:15.514688298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw9vx,Uid:ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.515306 kubelet[2567]: E0517 03:48:15.514877 2567 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.515306 kubelet[2567]: E0517 03:48:15.514939 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw9vx" May 17 03:48:15.515306 kubelet[2567]: E0517 03:48:15.514965 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw9vx" May 17 03:48:15.515396 kubelet[2567]: E0517 03:48:15.515021 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kw9vx_calico-system(ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kw9vx_calico-system(ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:15.600007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a-shm.mount: Deactivated successfully. May 17 03:48:15.654477 containerd[1462]: time="2025-05-17T03:48:15.654329942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wsfgb,Uid:97639112-d662-401e-9525-ef6c5cfa2196,Namespace:kube-system,Attempt:0,}" May 17 03:48:15.661034 containerd[1462]: time="2025-05-17T03:48:15.660928695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqtzv,Uid:100470f3-1018-4b21-81fe-cdd6b96f94f3,Namespace:kube-system,Attempt:0,}" May 17 03:48:15.689024 containerd[1462]: time="2025-05-17T03:48:15.688947882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bbf4dcdfc-vlsx4,Uid:c7325861-b11f-4c03-8427-1ec9f970f69e,Namespace:calico-system,Attempt:0,}" May 17 03:48:15.714263 containerd[1462]: time="2025-05-17T03:48:15.713821455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67566d8f66-fbrbb,Uid:9f1bd9b0-7eec-4bda-8bf7-c4484df07375,Namespace:calico-apiserver,Attempt:0,}" May 17 03:48:15.725446 containerd[1462]: time="2025-05-17T03:48:15.725394677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-gbf9j,Uid:5d31f0bb-0747-4e8f-868a-d7b2d8faa68d,Namespace:calico-system,Attempt:0,}" May 17 03:48:15.747332 containerd[1462]: time="2025-05-17T03:48:15.747159693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-578fbf45d9-l5h8n,Uid:01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d,Namespace:calico-system,Attempt:0,}" May 17 03:48:15.758569 containerd[1462]: time="2025-05-17T03:48:15.758525340Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67566d8f66-qmgwp,Uid:08766a61-c1c3-45ec-a870-662027187849,Namespace:calico-apiserver,Attempt:0,}" May 17 03:48:15.838244 containerd[1462]: time="2025-05-17T03:48:15.837926950Z" level=error msg="Failed to destroy network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.839673 containerd[1462]: time="2025-05-17T03:48:15.839635181Z" level=error msg="encountered an error cleaning up failed sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.840417 containerd[1462]: time="2025-05-17T03:48:15.840092951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wsfgb,Uid:97639112-d662-401e-9525-ef6c5cfa2196,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.840900 kubelet[2567]: E0517 03:48:15.840859 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.840958 kubelet[2567]: 
E0517 03:48:15.840922 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wsfgb" May 17 03:48:15.840958 kubelet[2567]: E0517 03:48:15.840949 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wsfgb" May 17 03:48:15.841036 kubelet[2567]: E0517 03:48:15.840998 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wsfgb_kube-system(97639112-d662-401e-9525-ef6c5cfa2196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wsfgb_kube-system(97639112-d662-401e-9525-ef6c5cfa2196)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wsfgb" podUID="97639112-d662-401e-9525-ef6c5cfa2196" May 17 03:48:15.942515 containerd[1462]: time="2025-05-17T03:48:15.942457650Z" level=error msg="Failed to destroy network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.944567 containerd[1462]: time="2025-05-17T03:48:15.944409392Z" level=error msg="encountered an error cleaning up failed sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.944567 containerd[1462]: time="2025-05-17T03:48:15.944470280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqtzv,Uid:100470f3-1018-4b21-81fe-cdd6b96f94f3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.945105 kubelet[2567]: E0517 03:48:15.944706 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.945105 kubelet[2567]: E0517 03:48:15.944769 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kqtzv" May 17 03:48:15.945105 kubelet[2567]: E0517 03:48:15.944792 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kqtzv" May 17 03:48:15.945304 kubelet[2567]: E0517 03:48:15.944846 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kqtzv_kube-system(100470f3-1018-4b21-81fe-cdd6b96f94f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kqtzv_kube-system(100470f3-1018-4b21-81fe-cdd6b96f94f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kqtzv" podUID="100470f3-1018-4b21-81fe-cdd6b96f94f3" May 17 03:48:15.989881 containerd[1462]: time="2025-05-17T03:48:15.989775434Z" level=error msg="Failed to destroy network for sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.990928 containerd[1462]: time="2025-05-17T03:48:15.990869528Z" level=error msg="Failed to destroy network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.991347 containerd[1462]: time="2025-05-17T03:48:15.991280239Z" level=error msg="encountered an error cleaning up failed sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.991453 containerd[1462]: time="2025-05-17T03:48:15.991410953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67566d8f66-qmgwp,Uid:08766a61-c1c3-45ec-a870-662027187849,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.993767 containerd[1462]: time="2025-05-17T03:48:15.992481587Z" level=error msg="encountered an error cleaning up failed sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.994398 containerd[1462]: time="2025-05-17T03:48:15.994294808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67566d8f66-fbrbb,Uid:9f1bd9b0-7eec-4bda-8bf7-c4484df07375,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.994669 kubelet[2567]: E0517 03:48:15.994276 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.994669 kubelet[2567]: E0517 03:48:15.994408 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67566d8f66-qmgwp" May 17 03:48:15.994669 kubelet[2567]: E0517 03:48:15.994431 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67566d8f66-qmgwp" May 17 03:48:15.995144 containerd[1462]: time="2025-05-17T03:48:15.993473046Z" level=error msg="Failed to destroy network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 
03:48:15.997082 kubelet[2567]: E0517 03:48:15.994545 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67566d8f66-qmgwp_calico-apiserver(08766a61-c1c3-45ec-a870-662027187849)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67566d8f66-qmgwp_calico-apiserver(08766a61-c1c3-45ec-a870-662027187849)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67566d8f66-qmgwp" podUID="08766a61-c1c3-45ec-a870-662027187849" May 17 03:48:15.997082 kubelet[2567]: E0517 03:48:15.995947 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:15.997082 kubelet[2567]: E0517 03:48:15.996063 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67566d8f66-fbrbb" May 17 03:48:15.997275 kubelet[2567]: E0517 03:48:15.996089 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67566d8f66-fbrbb" May 17 03:48:15.997275 kubelet[2567]: E0517 03:48:15.996155 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67566d8f66-fbrbb_calico-apiserver(9f1bd9b0-7eec-4bda-8bf7-c4484df07375)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67566d8f66-fbrbb_calico-apiserver(9f1bd9b0-7eec-4bda-8bf7-c4484df07375)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67566d8f66-fbrbb" podUID="9f1bd9b0-7eec-4bda-8bf7-c4484df07375" May 17 03:48:15.998639 containerd[1462]: time="2025-05-17T03:48:15.997973490Z" level=error msg="encountered an error cleaning up failed sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.000343 containerd[1462]: time="2025-05-17T03:48:16.000287370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-578fbf45d9-l5h8n,Uid:01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.001483 kubelet[2567]: E0517 03:48:16.001403 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.001483 kubelet[2567]: E0517 03:48:16.001467 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-578fbf45d9-l5h8n" May 17 03:48:16.001687 kubelet[2567]: E0517 03:48:16.001491 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-578fbf45d9-l5h8n" May 17 03:48:16.001687 kubelet[2567]: E0517 03:48:16.001566 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-578fbf45d9-l5h8n_calico-system(01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-578fbf45d9-l5h8n_calico-system(01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-578fbf45d9-l5h8n" podUID="01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d" May 17 03:48:16.010633 containerd[1462]: time="2025-05-17T03:48:16.010583969Z" level=error msg="Failed to destroy network for sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.010992 containerd[1462]: time="2025-05-17T03:48:16.010953531Z" level=error msg="encountered an error cleaning up failed sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.011072 containerd[1462]: time="2025-05-17T03:48:16.011028107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bbf4dcdfc-vlsx4,Uid:c7325861-b11f-4c03-8427-1ec9f970f69e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.011794 kubelet[2567]: E0517 03:48:16.011368 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.011794 kubelet[2567]: E0517 03:48:16.011424 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bbf4dcdfc-vlsx4" May 17 03:48:16.011794 kubelet[2567]: E0517 03:48:16.011462 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bbf4dcdfc-vlsx4" May 17 03:48:16.012064 kubelet[2567]: E0517 03:48:16.011521 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bbf4dcdfc-vlsx4_calico-system(c7325861-b11f-4c03-8427-1ec9f970f69e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bbf4dcdfc-vlsx4_calico-system(c7325861-b11f-4c03-8427-1ec9f970f69e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bbf4dcdfc-vlsx4" podUID="c7325861-b11f-4c03-8427-1ec9f970f69e" May 17 03:48:16.015379 containerd[1462]: time="2025-05-17T03:48:16.015338246Z" level=error msg="Failed to destroy network for sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.016039 containerd[1462]: time="2025-05-17T03:48:16.016011072Z" level=error msg="encountered an error cleaning up failed sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.016353 containerd[1462]: time="2025-05-17T03:48:16.016293333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-gbf9j,Uid:5d31f0bb-0747-4e8f-868a-d7b2d8faa68d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.016676 kubelet[2567]: E0517 03:48:16.016619 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.016729 kubelet[2567]: E0517 
03:48:16.016691 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-gbf9j" May 17 03:48:16.016768 kubelet[2567]: E0517 03:48:16.016716 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-gbf9j" May 17 03:48:16.016881 kubelet[2567]: E0517 03:48:16.016833 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-gbf9j_calico-system(5d31f0bb-0747-4e8f-868a-d7b2d8faa68d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-gbf9j_calico-system(5d31f0bb-0747-4e8f-868a-d7b2d8faa68d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:48:16.467273 kubelet[2567]: I0517 03:48:16.465998 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:16.468143 
containerd[1462]: time="2025-05-17T03:48:16.467700862Z" level=info msg="StopPodSandbox for \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\"" May 17 03:48:16.469723 containerd[1462]: time="2025-05-17T03:48:16.469424175Z" level=info msg="Ensure that sandbox 151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4 in task-service has been cleanup successfully" May 17 03:48:16.472270 kubelet[2567]: I0517 03:48:16.471785 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:16.478614 containerd[1462]: time="2025-05-17T03:48:16.478544167Z" level=info msg="StopPodSandbox for \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\"" May 17 03:48:16.478812 kubelet[2567]: I0517 03:48:16.478693 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" May 17 03:48:16.479758 containerd[1462]: time="2025-05-17T03:48:16.479590134Z" level=info msg="Ensure that sandbox 33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a in task-service has been cleanup successfully" May 17 03:48:16.485595 containerd[1462]: time="2025-05-17T03:48:16.485463701Z" level=info msg="StopPodSandbox for \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\"" May 17 03:48:16.486144 containerd[1462]: time="2025-05-17T03:48:16.486086363Z" level=info msg="Ensure that sandbox d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13 in task-service has been cleanup successfully" May 17 03:48:16.499614 kubelet[2567]: I0517 03:48:16.499545 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:16.507340 containerd[1462]: time="2025-05-17T03:48:16.507058701Z" level=info msg="StopPodSandbox for 
\"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\"" May 17 03:48:16.509262 containerd[1462]: time="2025-05-17T03:48:16.509041214Z" level=info msg="Ensure that sandbox 4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84 in task-service has been cleanup successfully" May 17 03:48:16.510540 kubelet[2567]: I0517 03:48:16.510443 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:16.516325 containerd[1462]: time="2025-05-17T03:48:16.516078233Z" level=info msg="StopPodSandbox for \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\"" May 17 03:48:16.520746 containerd[1462]: time="2025-05-17T03:48:16.520152941Z" level=info msg="Ensure that sandbox bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d in task-service has been cleanup successfully" May 17 03:48:16.533145 kubelet[2567]: I0517 03:48:16.533092 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:16.537426 containerd[1462]: time="2025-05-17T03:48:16.537383096Z" level=info msg="StopPodSandbox for \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\"" May 17 03:48:16.539247 containerd[1462]: time="2025-05-17T03:48:16.537935561Z" level=info msg="Ensure that sandbox d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4 in task-service has been cleanup successfully" May 17 03:48:16.566449 kubelet[2567]: I0517 03:48:16.566423 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:16.568858 containerd[1462]: time="2025-05-17T03:48:16.568432217Z" level=info msg="StopPodSandbox for \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\"" May 17 03:48:16.569623 containerd[1462]: 
time="2025-05-17T03:48:16.569187446Z" level=info msg="Ensure that sandbox 58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0 in task-service has been cleanup successfully" May 17 03:48:16.577730 containerd[1462]: time="2025-05-17T03:48:16.577679988Z" level=error msg="StopPodSandbox for \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\" failed" error="failed to destroy network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.578809 kubelet[2567]: E0517 03:48:16.578770 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:16.578898 kubelet[2567]: E0517 03:48:16.578818 2567 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a"} May 17 03:48:16.578978 kubelet[2567]: E0517 03:48:16.578952 2567 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 
03:48:16.579061 kubelet[2567]: E0517 03:48:16.578985 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kw9vx" podUID="ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134" May 17 03:48:16.588396 kubelet[2567]: I0517 03:48:16.588365 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:16.593728 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4-shm.mount: Deactivated successfully. May 17 03:48:16.593827 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84-shm.mount: Deactivated successfully. 
May 17 03:48:16.606036 containerd[1462]: time="2025-05-17T03:48:16.605993520Z" level=info msg="StopPodSandbox for \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\"" May 17 03:48:16.607172 containerd[1462]: time="2025-05-17T03:48:16.607151060Z" level=info msg="Ensure that sandbox d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589 in task-service has been cleanup successfully" May 17 03:48:16.632749 containerd[1462]: time="2025-05-17T03:48:16.632591477Z" level=error msg="StopPodSandbox for \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\" failed" error="failed to destroy network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.633243 kubelet[2567]: E0517 03:48:16.633108 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:16.633243 kubelet[2567]: E0517 03:48:16.633164 2567 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84"} May 17 03:48:16.633575 kubelet[2567]: E0517 03:48:16.633475 2567 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97639112-d662-401e-9525-ef6c5cfa2196\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 03:48:16.633575 kubelet[2567]: E0517 03:48:16.633538 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97639112-d662-401e-9525-ef6c5cfa2196\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wsfgb" podUID="97639112-d662-401e-9525-ef6c5cfa2196" May 17 03:48:16.653385 containerd[1462]: time="2025-05-17T03:48:16.653333884Z" level=error msg="StopPodSandbox for \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\" failed" error="failed to destroy network for sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.655486 kubelet[2567]: E0517 03:48:16.655436 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:16.655577 kubelet[2567]: E0517 03:48:16.655502 2567 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4"} May 17 03:48:16.655577 kubelet[2567]: E0517 03:48:16.655539 2567 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7325861-b11f-4c03-8427-1ec9f970f69e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 03:48:16.655689 kubelet[2567]: E0517 03:48:16.655566 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7325861-b11f-4c03-8427-1ec9f970f69e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bbf4dcdfc-vlsx4" podUID="c7325861-b11f-4c03-8427-1ec9f970f69e" May 17 03:48:16.669695 containerd[1462]: time="2025-05-17T03:48:16.669640028Z" level=error msg="StopPodSandbox for \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\" failed" error="failed to destroy network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.670386 kubelet[2567]: E0517 03:48:16.670339 2567 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:16.670485 kubelet[2567]: E0517 03:48:16.670408 2567 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0"} May 17 03:48:16.670485 kubelet[2567]: E0517 03:48:16.670450 2567 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08766a61-c1c3-45ec-a870-662027187849\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 03:48:16.670577 kubelet[2567]: E0517 03:48:16.670485 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08766a61-c1c3-45ec-a870-662027187849\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67566d8f66-qmgwp" podUID="08766a61-c1c3-45ec-a870-662027187849" May 17 03:48:16.673528 containerd[1462]: time="2025-05-17T03:48:16.673458900Z" level=error msg="StopPodSandbox for 
\"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\" failed" error="failed to destroy network for sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.674024 kubelet[2567]: E0517 03:48:16.673775 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:16.674024 kubelet[2567]: E0517 03:48:16.673845 2567 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d"} May 17 03:48:16.674024 kubelet[2567]: E0517 03:48:16.673937 2567 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f1bd9b0-7eec-4bda-8bf7-c4484df07375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 03:48:16.674024 kubelet[2567]: E0517 03:48:16.673975 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f1bd9b0-7eec-4bda-8bf7-c4484df07375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67566d8f66-fbrbb" podUID="9f1bd9b0-7eec-4bda-8bf7-c4484df07375" May 17 03:48:16.676070 containerd[1462]: time="2025-05-17T03:48:16.676031117Z" level=error msg="StopPodSandbox for \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\" failed" error="failed to destroy network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.676384 kubelet[2567]: E0517 03:48:16.676285 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:16.676384 kubelet[2567]: E0517 03:48:16.676332 2567 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4"} May 17 03:48:16.676621 kubelet[2567]: E0517 03:48:16.676531 2567 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"100470f3-1018-4b21-81fe-cdd6b96f94f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 03:48:16.676621 kubelet[2567]: E0517 03:48:16.676587 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"100470f3-1018-4b21-81fe-cdd6b96f94f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kqtzv" podUID="100470f3-1018-4b21-81fe-cdd6b96f94f3" May 17 03:48:16.677496 containerd[1462]: time="2025-05-17T03:48:16.677435803Z" level=error msg="StopPodSandbox for \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\" failed" error="failed to destroy network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.677810 kubelet[2567]: E0517 03:48:16.677760 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" May 17 03:48:16.677891 kubelet[2567]: E0517 03:48:16.677816 2567 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13"} May 17 03:48:16.677891 kubelet[2567]: E0517 03:48:16.677850 2567 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 03:48:16.677891 kubelet[2567]: E0517 03:48:16.677878 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-578fbf45d9-l5h8n" podUID="01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d" May 17 03:48:16.699851 containerd[1462]: time="2025-05-17T03:48:16.699789689Z" level=error msg="StopPodSandbox for \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\" failed" error="failed to destroy network for sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 03:48:16.700253 kubelet[2567]: E0517 03:48:16.700132 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:16.700371 kubelet[2567]: E0517 03:48:16.700184 2567 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589"} May 17 03:48:16.700494 kubelet[2567]: E0517 03:48:16.700473 2567 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 03:48:16.700672 kubelet[2567]: E0517 03:48:16.700553 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:48:25.852680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671915004.mount: Deactivated successfully. 
May 17 03:48:26.517071 containerd[1462]: time="2025-05-17T03:48:26.516711505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:26.529796 containerd[1462]: time="2025-05-17T03:48:26.529493610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 03:48:26.543584 containerd[1462]: time="2025-05-17T03:48:26.543343912Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:26.551885 containerd[1462]: time="2025-05-17T03:48:26.551548537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:26.554606 containerd[1462]: time="2025-05-17T03:48:26.554083406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 11.092401067s" May 17 03:48:26.554606 containerd[1462]: time="2025-05-17T03:48:26.554252913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 03:48:26.646288 containerd[1462]: time="2025-05-17T03:48:26.646227847Z" level=info msg="CreateContainer within sandbox \"fa61e0ebc5ead2284243cbceba5f0d1b51f3d63709f0039ff6a71561dc0edefe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 03:48:26.685365 containerd[1462]: time="2025-05-17T03:48:26.685302975Z" level=info 
msg="CreateContainer within sandbox \"fa61e0ebc5ead2284243cbceba5f0d1b51f3d63709f0039ff6a71561dc0edefe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4478d9ad2989a7801120da2e23fcb3dbda37a5c427118e76ff0db9eee3d4549c\"" May 17 03:48:26.688294 containerd[1462]: time="2025-05-17T03:48:26.687361049Z" level=info msg="StartContainer for \"4478d9ad2989a7801120da2e23fcb3dbda37a5c427118e76ff0db9eee3d4549c\"" May 17 03:48:26.751423 systemd[1]: Started cri-containerd-4478d9ad2989a7801120da2e23fcb3dbda37a5c427118e76ff0db9eee3d4549c.scope - libcontainer container 4478d9ad2989a7801120da2e23fcb3dbda37a5c427118e76ff0db9eee3d4549c. May 17 03:48:26.823640 containerd[1462]: time="2025-05-17T03:48:26.823487482Z" level=info msg="StartContainer for \"4478d9ad2989a7801120da2e23fcb3dbda37a5c427118e76ff0db9eee3d4549c\" returns successfully" May 17 03:48:26.984255 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 03:48:26.984451 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 03:48:27.152485 containerd[1462]: time="2025-05-17T03:48:27.149016736Z" level=info msg="StopPodSandbox for \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\"" May 17 03:48:27.152485 containerd[1462]: time="2025-05-17T03:48:27.149822182Z" level=info msg="StopPodSandbox for \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\"" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.318 [INFO][3809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.319 [INFO][3809] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" iface="eth0" netns="/var/run/netns/cni-434a9de0-82c5-1594-cd64-37ee25d9660d" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.319 [INFO][3809] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" iface="eth0" netns="/var/run/netns/cni-434a9de0-82c5-1594-cd64-37ee25d9660d" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.321 [INFO][3809] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" iface="eth0" netns="/var/run/netns/cni-434a9de0-82c5-1594-cd64-37ee25d9660d" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.321 [INFO][3809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.321 [INFO][3809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.381 [INFO][3822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" HandleID="k8s-pod-network.d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.381 [INFO][3822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.382 [INFO][3822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.393 [WARNING][3822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" HandleID="k8s-pod-network.d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.393 [INFO][3822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" HandleID="k8s-pod-network.d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.395 [INFO][3822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:27.401845 containerd[1462]: 2025-05-17 03:48:27.399 [INFO][3809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:27.403922 containerd[1462]: time="2025-05-17T03:48:27.403831046Z" level=info msg="TearDown network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\" successfully" May 17 03:48:27.404025 containerd[1462]: time="2025-05-17T03:48:27.404005703Z" level=info msg="StopPodSandbox for \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\" returns successfully" May 17 03:48:27.407833 containerd[1462]: time="2025-05-17T03:48:27.407794347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqtzv,Uid:100470f3-1018-4b21-81fe-cdd6b96f94f3,Namespace:kube-system,Attempt:1,}" May 17 03:48:27.408506 systemd[1]: run-netns-cni\x2d434a9de0\x2d82c5\x2d1594\x2dcd64\x2d37ee25d9660d.mount: Deactivated successfully. 
May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.323 [INFO][3808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.324 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" iface="eth0" netns="/var/run/netns/cni-5b67167b-a2ae-3d3b-b469-6ee68316e651" May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.325 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" iface="eth0" netns="/var/run/netns/cni-5b67167b-a2ae-3d3b-b469-6ee68316e651" May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.327 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" iface="eth0" netns="/var/run/netns/cni-5b67167b-a2ae-3d3b-b469-6ee68316e651" May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.327 [INFO][3808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.327 [INFO][3808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.400 [INFO][3824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" HandleID="k8s-pod-network.d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0" May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.401 
[INFO][3824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.401 [INFO][3824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.421 [WARNING][3824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" HandleID="k8s-pod-network.d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0" May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.421 [INFO][3824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" HandleID="k8s-pod-network.d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0" May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.423 [INFO][3824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:27.428449 containerd[1462]: 2025-05-17 03:48:27.426 [INFO][3808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" May 17 03:48:27.429015 containerd[1462]: time="2025-05-17T03:48:27.428586317Z" level=info msg="TearDown network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\" successfully" May 17 03:48:27.429015 containerd[1462]: time="2025-05-17T03:48:27.428617200Z" level=info msg="StopPodSandbox for \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\" returns successfully" May 17 03:48:27.434615 systemd[1]: run-netns-cni\x2d5b67167b\x2da2ae\x2d3d3b\x2db469\x2d6ee68316e651.mount: Deactivated successfully. 
May 17 03:48:27.561048 kubelet[2567]: I0517 03:48:27.559845 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-whisker-ca-bundle\") pod \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\" (UID: \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\") " May 17 03:48:27.561048 kubelet[2567]: I0517 03:48:27.559937 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-whisker-backend-key-pair\") pod \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\" (UID: \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\") " May 17 03:48:27.561048 kubelet[2567]: I0517 03:48:27.560389 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d" (UID: "01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 03:48:27.561048 kubelet[2567]: I0517 03:48:27.560501 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvtfb\" (UniqueName: \"kubernetes.io/projected/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-kube-api-access-lvtfb\") pod \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\" (UID: \"01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d\") " May 17 03:48:27.561048 kubelet[2567]: I0517 03:48:27.560612 2567 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-whisker-ca-bundle\") on node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" DevicePath \"\"" May 17 03:48:27.579053 kubelet[2567]: I0517 03:48:27.578995 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d" (UID: "01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 03:48:27.582399 kubelet[2567]: I0517 03:48:27.582333 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-kube-api-access-lvtfb" (OuterVolumeSpecName: "kube-api-access-lvtfb") pod "01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d" (UID: "01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d"). InnerVolumeSpecName "kube-api-access-lvtfb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 03:48:27.662516 kubelet[2567]: I0517 03:48:27.661368 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvtfb\" (UniqueName: \"kubernetes.io/projected/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-kube-api-access-lvtfb\") on node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" DevicePath \"\"" May 17 03:48:27.662774 kubelet[2567]: I0517 03:48:27.662691 2567 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d-whisker-backend-key-pair\") on node \"ci-4081-3-3-n-2f0bbd4ac2.novalocal\" DevicePath \"\"" May 17 03:48:27.683776 systemd[1]: Removed slice kubepods-besteffort-pod01156fe6_8d16_47a0_a6f4_f7aee2dfcb6d.slice - libcontainer container kubepods-besteffort-pod01156fe6_8d16_47a0_a6f4_f7aee2dfcb6d.slice. May 17 03:48:27.717750 kubelet[2567]: I0517 03:48:27.715733 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qts46" podStartSLOduration=2.359817722 podStartE2EDuration="28.715706251s" podCreationTimestamp="2025-05-17 03:47:59 +0000 UTC" firstStartedPulling="2025-05-17 03:48:00.200954516 +0000 UTC m=+20.166539790" lastFinishedPulling="2025-05-17 03:48:26.556842985 +0000 UTC m=+46.522428319" observedRunningTime="2025-05-17 03:48:27.71376845 +0000 UTC m=+47.679353724" watchObservedRunningTime="2025-05-17 03:48:27.715706251 +0000 UTC m=+47.681291525" May 17 03:48:27.768564 systemd-networkd[1368]: calif86826a9ad5: Link UP May 17 03:48:27.772565 systemd-networkd[1368]: calif86826a9ad5: Gained carrier May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.523 [INFO][3842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.542 [INFO][3842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0 coredns-674b8bbfcf- kube-system 100470f3-1018-4b21-81fe-cdd6b96f94f3 923 0 2025-05-17 03:47:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-2f0bbd4ac2.novalocal coredns-674b8bbfcf-kqtzv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif86826a9ad5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqtzv" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.542 [INFO][3842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqtzv" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.610 [INFO][3851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" HandleID="k8s-pod-network.7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.610 [INFO][3851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" HandleID="k8s-pod-network.7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000321820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-2f0bbd4ac2.novalocal", "pod":"coredns-674b8bbfcf-kqtzv", "timestamp":"2025-05-17 03:48:27.609891241 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.611 [INFO][3851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.611 [INFO][3851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.611 [INFO][3851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2f0bbd4ac2.novalocal' May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.625 [INFO][3851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.641 [INFO][3851] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.659 [INFO][3851] ipam/ipam.go 543: Ran out of existing affine blocks for host host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.664 [INFO][3851] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.675 [INFO][3851] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.16.192/26 May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.675 [INFO][3851] ipam/ipam.go 572: Found unclaimed block host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" subnet=192.168.16.192/26 May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.675 [INFO][3851] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" subnet=192.168.16.192/26 May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.685 [INFO][3851] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" subnet=192.168.16.192/26 May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.686 [INFO][3851] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.690 [INFO][3851] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.695 [INFO][3851] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.699 [INFO][3851] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.699 [INFO][3851] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" subnet=192.168.16.192/26 May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.708 [INFO][3851] 
ipam/ipam_block_reader_writer.go 267: Successfully created block May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.708 [INFO][3851] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" subnet=192.168.16.192/26 May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.719 [INFO][3851] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" subnet=192.168.16.192/26 May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.721 [INFO][3851] ipam/ipam.go 607: Block '192.168.16.192/26' has 64 free ips which is more than 1 ips required. host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" subnet=192.168.16.192/26 May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.721 [INFO][3851] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.825059 containerd[1462]: 2025-05-17 03:48:27.727 [INFO][3851] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176 May 17 03:48:27.830155 containerd[1462]: 2025-05-17 03:48:27.737 [INFO][3851] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.830155 containerd[1462]: 2025-05-17 03:48:27.745 [INFO][3851] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.192/26] block=192.168.16.192/26 handle="k8s-pod-network.7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.830155 containerd[1462]: 2025-05-17 03:48:27.745 [INFO][3851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.192/26] 
handle="k8s-pod-network.7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:27.830155 containerd[1462]: 2025-05-17 03:48:27.745 [INFO][3851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:27.830155 containerd[1462]: 2025-05-17 03:48:27.746 [INFO][3851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.192/26] IPv6=[] ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" HandleID="k8s-pod-network.7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.830155 containerd[1462]: 2025-05-17 03:48:27.749 [INFO][3842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqtzv" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"100470f3-1018-4b21-81fe-cdd6b96f94f3", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"", Pod:"coredns-674b8bbfcf-kqtzv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif86826a9ad5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:27.830155 containerd[1462]: 2025-05-17 03:48:27.749 [INFO][3842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.192/32] ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqtzv" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.830155 containerd[1462]: 2025-05-17 03:48:27.749 [INFO][3842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif86826a9ad5 ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqtzv" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.830155 containerd[1462]: 2025-05-17 03:48:27.775 [INFO][3842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqtzv" 
WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.830799 containerd[1462]: 2025-05-17 03:48:27.775 [INFO][3842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqtzv" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"100470f3-1018-4b21-81fe-cdd6b96f94f3", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176", Pod:"coredns-674b8bbfcf-kqtzv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif86826a9ad5", MAC:"8e:d2:50:47:4b:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:27.830799 containerd[1462]: 2025-05-17 03:48:27.820 [INFO][3842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqtzv" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:27.854640 systemd[1]: var-lib-kubelet-pods-01156fe6\x2d8d16\x2d47a0\x2da6f4\x2df7aee2dfcb6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlvtfb.mount: Deactivated successfully. May 17 03:48:27.854771 systemd[1]: var-lib-kubelet-pods-01156fe6\x2d8d16\x2d47a0\x2da6f4\x2df7aee2dfcb6d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 03:48:27.894393 containerd[1462]: time="2025-05-17T03:48:27.893622338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:48:27.894393 containerd[1462]: time="2025-05-17T03:48:27.893745620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:48:27.894393 containerd[1462]: time="2025-05-17T03:48:27.893783788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:27.894393 containerd[1462]: time="2025-05-17T03:48:27.893890907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:27.894589 systemd[1]: Created slice kubepods-besteffort-pod2a3cbd78_bd6f_48be_a6de_d94293efa7ac.slice - libcontainer container kubepods-besteffort-pod2a3cbd78_bd6f_48be_a6de_d94293efa7ac.slice. May 17 03:48:27.950049 systemd[1]: Started cri-containerd-7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176.scope - libcontainer container 7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176. May 17 03:48:27.976623 kubelet[2567]: I0517 03:48:27.976563 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a3cbd78-bd6f-48be-a6de-d94293efa7ac-whisker-backend-key-pair\") pod \"whisker-7d56654d85-kd7gz\" (UID: \"2a3cbd78-bd6f-48be-a6de-d94293efa7ac\") " pod="calico-system/whisker-7d56654d85-kd7gz" May 17 03:48:27.976623 kubelet[2567]: I0517 03:48:27.976618 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a3cbd78-bd6f-48be-a6de-d94293efa7ac-whisker-ca-bundle\") pod \"whisker-7d56654d85-kd7gz\" (UID: \"2a3cbd78-bd6f-48be-a6de-d94293efa7ac\") " pod="calico-system/whisker-7d56654d85-kd7gz" May 17 03:48:27.976850 kubelet[2567]: I0517 03:48:27.976646 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqwkd\" (UniqueName: \"kubernetes.io/projected/2a3cbd78-bd6f-48be-a6de-d94293efa7ac-kube-api-access-rqwkd\") pod \"whisker-7d56654d85-kd7gz\" (UID: \"2a3cbd78-bd6f-48be-a6de-d94293efa7ac\") " pod="calico-system/whisker-7d56654d85-kd7gz" May 17 03:48:28.008218 containerd[1462]: time="2025-05-17T03:48:28.007978507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqtzv,Uid:100470f3-1018-4b21-81fe-cdd6b96f94f3,Namespace:kube-system,Attempt:1,} returns sandbox id 
\"7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176\"" May 17 03:48:28.020672 containerd[1462]: time="2025-05-17T03:48:28.020625222Z" level=info msg="CreateContainer within sandbox \"7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 03:48:28.050235 containerd[1462]: time="2025-05-17T03:48:28.050169731Z" level=info msg="CreateContainer within sandbox \"7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b147aaec28c10cd56c6efff264a853e82a2bc5c5150b677961afc3cc09a3cee\"" May 17 03:48:28.051604 containerd[1462]: time="2025-05-17T03:48:28.051488241Z" level=info msg="StartContainer for \"5b147aaec28c10cd56c6efff264a853e82a2bc5c5150b677961afc3cc09a3cee\"" May 17 03:48:28.079354 systemd[1]: Started cri-containerd-5b147aaec28c10cd56c6efff264a853e82a2bc5c5150b677961afc3cc09a3cee.scope - libcontainer container 5b147aaec28c10cd56c6efff264a853e82a2bc5c5150b677961afc3cc09a3cee. 
May 17 03:48:28.114723 containerd[1462]: time="2025-05-17T03:48:28.114674751Z" level=info msg="StartContainer for \"5b147aaec28c10cd56c6efff264a853e82a2bc5c5150b677961afc3cc09a3cee\" returns successfully" May 17 03:48:28.151478 containerd[1462]: time="2025-05-17T03:48:28.151413985Z" level=info msg="StopPodSandbox for \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\"" May 17 03:48:28.157160 kubelet[2567]: I0517 03:48:28.157095 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d" path="/var/lib/kubelet/pods/01156fe6-8d16-47a0-a6f4-f7aee2dfcb6d/volumes" May 17 03:48:28.200358 containerd[1462]: time="2025-05-17T03:48:28.200181231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d56654d85-kd7gz,Uid:2a3cbd78-bd6f-48be-a6de-d94293efa7ac,Namespace:calico-system,Attempt:0,}" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.221 [INFO][3978] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.221 [INFO][3978] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" iface="eth0" netns="/var/run/netns/cni-de17d9dc-31c5-1dc6-f0f4-43806a9f5ff2" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.221 [INFO][3978] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" iface="eth0" netns="/var/run/netns/cni-de17d9dc-31c5-1dc6-f0f4-43806a9f5ff2" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.221 [INFO][3978] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" iface="eth0" netns="/var/run/netns/cni-de17d9dc-31c5-1dc6-f0f4-43806a9f5ff2" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.221 [INFO][3978] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.221 [INFO][3978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.255 [INFO][3985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" HandleID="k8s-pod-network.d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.256 [INFO][3985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.256 [INFO][3985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.266 [WARNING][3985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" HandleID="k8s-pod-network.d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.266 [INFO][3985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" HandleID="k8s-pod-network.d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.268 [INFO][3985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:28.274526 containerd[1462]: 2025-05-17 03:48:28.270 [INFO][3978] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:28.276668 containerd[1462]: time="2025-05-17T03:48:28.274613455Z" level=info msg="TearDown network for sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\" successfully" May 17 03:48:28.276668 containerd[1462]: time="2025-05-17T03:48:28.274666332Z" level=info msg="StopPodSandbox for \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\" returns successfully" May 17 03:48:28.277860 containerd[1462]: time="2025-05-17T03:48:28.277740037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-gbf9j,Uid:5d31f0bb-0747-4e8f-868a-d7b2d8faa68d,Namespace:calico-system,Attempt:1,}" May 17 03:48:28.396736 systemd-networkd[1368]: calia89077813ee: Link UP May 17 03:48:28.398643 systemd-networkd[1368]: calia89077813ee: Gained carrier May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.263 [INFO][3989] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 03:48:28.430986 containerd[1462]: 
2025-05-17 03:48:28.285 [INFO][3989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0 whisker-7d56654d85- calico-system 2a3cbd78-bd6f-48be-a6de-d94293efa7ac 948 0 2025-05-17 03:48:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d56654d85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-3-n-2f0bbd4ac2.novalocal whisker-7d56654d85-kd7gz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia89077813ee [] [] }} ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Namespace="calico-system" Pod="whisker-7d56654d85-kd7gz" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.285 [INFO][3989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Namespace="calico-system" Pod="whisker-7d56654d85-kd7gz" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.326 [INFO][4003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" HandleID="k8s-pod-network.1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.326 [INFO][4003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" HandleID="k8s-pod-network.1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" 
Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-2f0bbd4ac2.novalocal", "pod":"whisker-7d56654d85-kd7gz", "timestamp":"2025-05-17 03:48:28.326445236 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.326 [INFO][4003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.326 [INFO][4003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.326 [INFO][4003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2f0bbd4ac2.novalocal' May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.339 [INFO][4003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.348 [INFO][4003] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.356 [INFO][4003] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.359 [INFO][4003] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.362 [INFO][4003] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.363 [INFO][4003] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.365 [INFO][4003] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982 May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.373 [INFO][4003] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.385 [INFO][4003] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.193/26] block=192.168.16.192/26 handle="k8s-pod-network.1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.385 [INFO][4003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.193/26] handle="k8s-pod-network.1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.385 [INFO][4003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 03:48:28.430986 containerd[1462]: 2025-05-17 03:48:28.385 [INFO][4003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.193/26] IPv6=[] ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" HandleID="k8s-pod-network.1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" May 17 03:48:28.431877 containerd[1462]: 2025-05-17 03:48:28.389 [INFO][3989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Namespace="calico-system" Pod="whisker-7d56654d85-kd7gz" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0", GenerateName:"whisker-7d56654d85-", Namespace:"calico-system", SelfLink:"", UID:"2a3cbd78-bd6f-48be-a6de-d94293efa7ac", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d56654d85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"", Pod:"whisker-7d56654d85-kd7gz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia89077813ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:28.431877 containerd[1462]: 2025-05-17 03:48:28.389 [INFO][3989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.193/32] ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Namespace="calico-system" Pod="whisker-7d56654d85-kd7gz" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" May 17 03:48:28.431877 containerd[1462]: 2025-05-17 03:48:28.389 [INFO][3989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia89077813ee ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Namespace="calico-system" Pod="whisker-7d56654d85-kd7gz" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" May 17 03:48:28.431877 containerd[1462]: 2025-05-17 03:48:28.400 [INFO][3989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Namespace="calico-system" Pod="whisker-7d56654d85-kd7gz" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" May 17 03:48:28.431877 containerd[1462]: 2025-05-17 03:48:28.404 [INFO][3989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Namespace="calico-system" Pod="whisker-7d56654d85-kd7gz" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0", 
GenerateName:"whisker-7d56654d85-", Namespace:"calico-system", SelfLink:"", UID:"2a3cbd78-bd6f-48be-a6de-d94293efa7ac", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d56654d85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982", Pod:"whisker-7d56654d85-kd7gz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia89077813ee", MAC:"2a:3f:cf:0d:13:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:28.431877 containerd[1462]: 2025-05-17 03:48:28.427 [INFO][3989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982" Namespace="calico-system" Pod="whisker-7d56654d85-kd7gz" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--7d56654d85--kd7gz-eth0" May 17 03:48:28.477524 containerd[1462]: time="2025-05-17T03:48:28.476537010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:48:28.477524 containerd[1462]: time="2025-05-17T03:48:28.476627975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:48:28.477524 containerd[1462]: time="2025-05-17T03:48:28.476641293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:28.477524 containerd[1462]: time="2025-05-17T03:48:28.476951406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:28.506436 systemd[1]: Started cri-containerd-1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982.scope - libcontainer container 1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982. May 17 03:48:28.507925 systemd-networkd[1368]: cali7ee7ecac93c: Link UP May 17 03:48:28.509434 systemd-networkd[1368]: cali7ee7ecac93c: Gained carrier May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.346 [INFO][4007] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.364 [INFO][4007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0 goldmane-78d55f7ddc- calico-system 5d31f0bb-0747-4e8f-868a-d7b2d8faa68d 955 0 2025-05-17 03:47:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-3-n-2f0bbd4ac2.novalocal goldmane-78d55f7ddc-gbf9j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7ee7ecac93c [] [] }} 
ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gbf9j" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.365 [INFO][4007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gbf9j" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.411 [INFO][4022] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" HandleID="k8s-pod-network.3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.411 [INFO][4022] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" HandleID="k8s-pod-network.3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3900), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-2f0bbd4ac2.novalocal", "pod":"goldmane-78d55f7ddc-gbf9j", "timestamp":"2025-05-17 03:48:28.411121291 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.411 [INFO][4022] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.411 [INFO][4022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.411 [INFO][4022] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2f0bbd4ac2.novalocal' May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.449 [INFO][4022] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.459 [INFO][4022] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.468 [INFO][4022] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.470 [INFO][4022] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.474 [INFO][4022] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.474 [INFO][4022] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.476 [INFO][4022] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988 May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.490 [INFO][4022] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.16.192/26 handle="k8s-pod-network.3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.501 [INFO][4022] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.194/26] block=192.168.16.192/26 handle="k8s-pod-network.3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.501 [INFO][4022] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.194/26] handle="k8s-pod-network.3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.501 [INFO][4022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:28.533769 containerd[1462]: 2025-05-17 03:48:28.501 [INFO][4022] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.194/26] IPv6=[] ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" HandleID="k8s-pod-network.3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.535159 containerd[1462]: 2025-05-17 03:48:28.503 [INFO][4007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gbf9j" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d", 
ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"", Pod:"goldmane-78d55f7ddc-gbf9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7ee7ecac93c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:28.535159 containerd[1462]: 2025-05-17 03:48:28.504 [INFO][4007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.194/32] ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gbf9j" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.535159 containerd[1462]: 2025-05-17 03:48:28.504 [INFO][4007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ee7ecac93c ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gbf9j" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.535159 containerd[1462]: 2025-05-17 03:48:28.510 [INFO][4007] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gbf9j" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.535159 containerd[1462]: 2025-05-17 03:48:28.511 [INFO][4007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gbf9j" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988", Pod:"goldmane-78d55f7ddc-gbf9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali7ee7ecac93c", MAC:"12:7b:aa:78:53:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:28.535159 containerd[1462]: 2025-05-17 03:48:28.527 [INFO][4007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988" Namespace="calico-system" Pod="goldmane-78d55f7ddc-gbf9j" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:28.560566 containerd[1462]: time="2025-05-17T03:48:28.560240810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:48:28.560566 containerd[1462]: time="2025-05-17T03:48:28.560309861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:48:28.560566 containerd[1462]: time="2025-05-17T03:48:28.560329150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:28.560566 containerd[1462]: time="2025-05-17T03:48:28.560424985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:28.580158 containerd[1462]: time="2025-05-17T03:48:28.580105839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d56654d85-kd7gz,Uid:2a3cbd78-bd6f-48be-a6de-d94293efa7ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fbaac4ff40ee6bedfc5e469ba4183c373588d24e0b0e731d83741174330b982\"" May 17 03:48:28.585745 containerd[1462]: time="2025-05-17T03:48:28.585614564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 03:48:28.589408 systemd[1]: Started cri-containerd-3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988.scope - libcontainer container 3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988. May 17 03:48:28.633235 containerd[1462]: time="2025-05-17T03:48:28.633138071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-gbf9j,Uid:5d31f0bb-0747-4e8f-868a-d7b2d8faa68d,Namespace:calico-system,Attempt:1,} returns sandbox id \"3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988\"" May 17 03:48:28.731616 kubelet[2567]: I0517 03:48:28.729428 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kqtzv" podStartSLOduration=44.729405938 podStartE2EDuration="44.729405938s" podCreationTimestamp="2025-05-17 03:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 03:48:28.700375649 +0000 UTC m=+48.665960953" watchObservedRunningTime="2025-05-17 03:48:28.729405938 +0000 UTC m=+48.694991212" May 17 03:48:28.870458 systemd[1]: run-netns-cni\x2dde17d9dc\x2d31c5\x2d1dc6\x2df0f4\x2d43806a9f5ff2.mount: Deactivated successfully. 
May 17 03:48:28.974465 containerd[1462]: time="2025-05-17T03:48:28.974348355Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:48:28.976000 containerd[1462]: time="2025-05-17T03:48:28.975966667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:48:28.976169 containerd[1462]: time="2025-05-17T03:48:28.976114930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 03:48:28.976547 kubelet[2567]: E0517 03:48:28.976435 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 03:48:28.976666 kubelet[2567]: E0517 03:48:28.976591 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 03:48:28.977173 containerd[1462]: time="2025-05-17T03:48:28.977148379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 03:48:28.982700 kubelet[2567]: E0517 03:48:28.982224 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d884af590bea4bba8c65a41c6bf35a3a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:48:29.151727 containerd[1462]: time="2025-05-17T03:48:29.151625954Z" level=info msg="StopPodSandbox for \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\"" May 17 03:48:29.153913 containerd[1462]: time="2025-05-17T03:48:29.153299597Z" level=info msg="StopPodSandbox for \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\"" May 17 03:48:29.154114 containerd[1462]: time="2025-05-17T03:48:29.154077743Z" level=info msg="StopPodSandbox for \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\"" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.309 [INFO][4170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.309 [INFO][4170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" iface="eth0" netns="/var/run/netns/cni-1c0b2731-163b-9054-96ee-2649058eaf0f" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.310 [INFO][4170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" iface="eth0" netns="/var/run/netns/cni-1c0b2731-163b-9054-96ee-2649058eaf0f" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.314 [INFO][4170] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" iface="eth0" netns="/var/run/netns/cni-1c0b2731-163b-9054-96ee-2649058eaf0f" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.314 [INFO][4170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.314 [INFO][4170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.345 [INFO][4207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" HandleID="k8s-pod-network.bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.345 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.346 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.356 [WARNING][4207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" HandleID="k8s-pod-network.bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.357 [INFO][4207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" HandleID="k8s-pod-network.bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.361 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:29.367382 containerd[1462]: 2025-05-17 03:48:29.364 [INFO][4170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:29.373651 containerd[1462]: time="2025-05-17T03:48:29.370616274Z" level=info msg="TearDown network for sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\" successfully" May 17 03:48:29.373651 containerd[1462]: time="2025-05-17T03:48:29.371257751Z" level=info msg="StopPodSandbox for \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\" returns successfully" May 17 03:48:29.371750 systemd[1]: run-netns-cni\x2d1c0b2731\x2d163b\x2d9054\x2d96ee\x2d2649058eaf0f.mount: Deactivated successfully. 
May 17 03:48:29.376302 containerd[1462]: time="2025-05-17T03:48:29.375872940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67566d8f66-fbrbb,Uid:9f1bd9b0-7eec-4bda-8bf7-c4484df07375,Namespace:calico-apiserver,Attempt:1,}" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.312 [INFO][4190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.313 [INFO][4190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" iface="eth0" netns="/var/run/netns/cni-1ccb2c2c-062d-37ad-271c-546d84c0cfc6" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.313 [INFO][4190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" iface="eth0" netns="/var/run/netns/cni-1ccb2c2c-062d-37ad-271c-546d84c0cfc6" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.313 [INFO][4190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" iface="eth0" netns="/var/run/netns/cni-1ccb2c2c-062d-37ad-271c-546d84c0cfc6" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.313 [INFO][4190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.313 [INFO][4190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.365 [INFO][4210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" HandleID="k8s-pod-network.4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.366 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.366 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.381 [WARNING][4210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" HandleID="k8s-pod-network.4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.381 [INFO][4210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" HandleID="k8s-pod-network.4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.384 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:29.395358 containerd[1462]: 2025-05-17 03:48:29.387 [INFO][4190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:29.399674 containerd[1462]: time="2025-05-17T03:48:29.396103803Z" level=info msg="TearDown network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\" successfully" May 17 03:48:29.399674 containerd[1462]: time="2025-05-17T03:48:29.396157562Z" level=info msg="StopPodSandbox for \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\" returns successfully" May 17 03:48:29.401486 systemd[1]: run-netns-cni\x2d1ccb2c2c\x2d062d\x2d37ad\x2d271c\x2d546d84c0cfc6.mount: Deactivated successfully. 
May 17 03:48:29.402394 containerd[1462]: time="2025-05-17T03:48:29.402241224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wsfgb,Uid:97639112-d662-401e-9525-ef6c5cfa2196,Namespace:kube-system,Attempt:1,}" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.303 [INFO][4189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.304 [INFO][4189] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" iface="eth0" netns="/var/run/netns/cni-e4219630-e446-dbd9-d797-1ec2eb52a389" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.304 [INFO][4189] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" iface="eth0" netns="/var/run/netns/cni-e4219630-e446-dbd9-d797-1ec2eb52a389" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.304 [INFO][4189] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" iface="eth0" netns="/var/run/netns/cni-e4219630-e446-dbd9-d797-1ec2eb52a389" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.304 [INFO][4189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.305 [INFO][4189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.370 [INFO][4204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" HandleID="k8s-pod-network.58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.373 [INFO][4204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.384 [INFO][4204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.393 [WARNING][4204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" HandleID="k8s-pod-network.58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.393 [INFO][4204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" HandleID="k8s-pod-network.58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.400 [INFO][4204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:29.407426 containerd[1462]: 2025-05-17 03:48:29.403 [INFO][4189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:29.410122 containerd[1462]: time="2025-05-17T03:48:29.409261296Z" level=info msg="TearDown network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\" successfully" May 17 03:48:29.410122 containerd[1462]: time="2025-05-17T03:48:29.409302159Z" level=info msg="StopPodSandbox for \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\" returns successfully" May 17 03:48:29.412270 containerd[1462]: time="2025-05-17T03:48:29.411592799Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:48:29.416083 systemd[1]: run-netns-cni\x2de4219630\x2de446\x2ddbd9\x2dd797\x2d1ec2eb52a389.mount: Deactivated successfully. 
May 17 03:48:29.418631 containerd[1462]: time="2025-05-17T03:48:29.418598501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67566d8f66-qmgwp,Uid:08766a61-c1c3-45ec-a870-662027187849,Namespace:calico-apiserver,Attempt:1,}" May 17 03:48:29.419373 containerd[1462]: time="2025-05-17T03:48:29.418805022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:48:29.419373 containerd[1462]: time="2025-05-17T03:48:29.418809251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 03:48:29.419540 kubelet[2567]: E0517 03:48:29.419489 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 03:48:29.419654 kubelet[2567]: E0517 03:48:29.419552 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 03:48:29.419966 kubelet[2567]: E0517 03:48:29.419874 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccdp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gbf9j_calico-system(5d31f0bb-0747-4e8f-868a-d7b2d8faa68d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:48:29.420894 containerd[1462]: time="2025-05-17T03:48:29.420854680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 03:48:29.421329 kubelet[2567]: E0517 03:48:29.421144 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 
403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:48:29.556562 systemd-networkd[1368]: calif86826a9ad5: Gained IPv6LL May 17 03:48:29.636295 systemd-networkd[1368]: cali89efc8cce05: Link UP May 17 03:48:29.638452 systemd-networkd[1368]: cali89efc8cce05: Gained carrier May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.474 [INFO][4224] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.505 [INFO][4224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0 calico-apiserver-67566d8f66- calico-apiserver 9f1bd9b0-7eec-4bda-8bf7-c4484df07375 985 0 2025-05-17 03:47:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67566d8f66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-2f0bbd4ac2.novalocal calico-apiserver-67566d8f66-fbrbb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89efc8cce05 [] [] }} ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-fbrbb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.506 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-fbrbb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.664653 containerd[1462]: 
2025-05-17 03:48:29.557 [INFO][4259] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" HandleID="k8s-pod-network.7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.557 [INFO][4259] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" HandleID="k8s-pod-network.7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfda0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-2f0bbd4ac2.novalocal", "pod":"calico-apiserver-67566d8f66-fbrbb", "timestamp":"2025-05-17 03:48:29.557130105 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.557 [INFO][4259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.557 [INFO][4259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.557 [INFO][4259] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2f0bbd4ac2.novalocal' May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.573 [INFO][4259] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.584 [INFO][4259] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.595 [INFO][4259] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.598 [INFO][4259] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.601 [INFO][4259] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.601 [INFO][4259] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.605 [INFO][4259] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.613 [INFO][4259] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.664653 
containerd[1462]: 2025-05-17 03:48:29.628 [INFO][4259] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.196/26] block=192.168.16.192/26 handle="k8s-pod-network.7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.628 [INFO][4259] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.196/26] handle="k8s-pod-network.7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.628 [INFO][4259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:29.664653 containerd[1462]: 2025-05-17 03:48:29.628 [INFO][4259] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.196/26] IPv6=[] ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" HandleID="k8s-pod-network.7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.665619 containerd[1462]: 2025-05-17 03:48:29.632 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-fbrbb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0", GenerateName:"calico-apiserver-67566d8f66-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f1bd9b0-7eec-4bda-8bf7-c4484df07375", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 56, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67566d8f66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"", Pod:"calico-apiserver-67566d8f66-fbrbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89efc8cce05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:29.665619 containerd[1462]: 2025-05-17 03:48:29.632 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.196/32] ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-fbrbb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.665619 containerd[1462]: 2025-05-17 03:48:29.632 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89efc8cce05 ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-fbrbb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.665619 containerd[1462]: 2025-05-17 03:48:29.640 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-fbrbb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.665619 containerd[1462]: 2025-05-17 03:48:29.641 [INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-fbrbb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0", GenerateName:"calico-apiserver-67566d8f66-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f1bd9b0-7eec-4bda-8bf7-c4484df07375", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67566d8f66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a", Pod:"calico-apiserver-67566d8f66-fbrbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.196/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89efc8cce05", MAC:"a2:08:e9:cb:44:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:29.665619 containerd[1462]: 2025-05-17 03:48:29.662 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-fbrbb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:29.690806 kubelet[2567]: E0517 03:48:29.690720 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:48:29.695098 containerd[1462]: time="2025-05-17T03:48:29.694308600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:48:29.695391 containerd[1462]: time="2025-05-17T03:48:29.694923474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:48:29.695391 containerd[1462]: time="2025-05-17T03:48:29.694948605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:29.695391 containerd[1462]: time="2025-05-17T03:48:29.695053098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:29.720957 systemd[1]: Started cri-containerd-7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a.scope - libcontainer container 7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a. May 17 03:48:29.768271 systemd-networkd[1368]: caliaf1e319f968: Link UP May 17 03:48:29.769835 systemd-networkd[1368]: caliaf1e319f968: Gained carrier May 17 03:48:29.776764 containerd[1462]: time="2025-05-17T03:48:29.776705486Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:48:29.782256 containerd[1462]: time="2025-05-17T03:48:29.781910147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:48:29.782256 containerd[1462]: time="2025-05-17T03:48:29.782029631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 03:48:29.784615 kubelet[2567]: E0517 03:48:29.782822 2567 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 03:48:29.784615 kubelet[2567]: E0517 03:48:29.783387 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 03:48:29.784615 kubelet[2567]: E0517 03:48:29.783563 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:48:29.785503 kubelet[2567]: E0517 03:48:29.785396 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:48:29.812380 systemd-networkd[1368]: cali7ee7ecac93c: Gained IPv6LL May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.497 [INFO][4233] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.521 [INFO][4233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0 coredns-674b8bbfcf- kube-system 97639112-d662-401e-9525-ef6c5cfa2196 986 0 2025-05-17 03:47:44 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-2f0bbd4ac2.novalocal coredns-674b8bbfcf-wsfgb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaf1e319f968 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfgb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.521 [INFO][4233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfgb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.582 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" HandleID="k8s-pod-network.bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.583 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" HandleID="k8s-pod-network.bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-2f0bbd4ac2.novalocal", "pod":"coredns-674b8bbfcf-wsfgb", "timestamp":"2025-05-17 
03:48:29.582477539 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.586 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.629 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.630 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2f0bbd4ac2.novalocal' May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.675 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.691 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.710 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.724 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.730 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.730 [INFO][4265] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" 
host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.733 [INFO][4265] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648 May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.741 [INFO][4265] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.754 [INFO][4265] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.197/26] block=192.168.16.192/26 handle="k8s-pod-network.bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.754 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.197/26] handle="k8s-pod-network.bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.755 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
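The IPAM exchange above acquires the host-wide lock, confirms this node's affinity for the block 192.168.16.192/26, and claims 192.168.16.197 from it. A minimal sketch of the containment invariant that claim satisfies — an address handed out from an affine block must fall inside that block's CIDR (the helper name is illustrative, not Calico's API):

```python
import ipaddress

def claimed_in_block(claimed: str, block: str) -> bool:
    # ipaddress.ip_interface accepts the "addr/prefix" form the log prints;
    # .ip strips the prefix so we can test membership in the block's network.
    return ipaddress.ip_interface(claimed).ip in ipaddress.ip_network(block)

# Values taken directly from the IPAM log entries above.
print(claimed_in_block("192.168.16.197/26", "192.168.16.192/26"))  # True
```

The /26 block spans 192.168.16.192–192.168.16.255, which is why every pod endpoint in this section (.196, .197, .198) lands in the same block under the same host affinity.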
May 17 03:48:29.826273 containerd[1462]: 2025-05-17 03:48:29.755 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.197/26] IPv6=[] ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" HandleID="k8s-pod-network.bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.827577 containerd[1462]: 2025-05-17 03:48:29.761 [INFO][4233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfgb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"97639112-d662-401e-9525-ef6c5cfa2196", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"", Pod:"coredns-674b8bbfcf-wsfgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"caliaf1e319f968", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:29.827577 containerd[1462]: 2025-05-17 03:48:29.761 [INFO][4233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.197/32] ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfgb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.827577 containerd[1462]: 2025-05-17 03:48:29.761 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf1e319f968 ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfgb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.827577 containerd[1462]: 2025-05-17 03:48:29.773 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfgb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.827577 containerd[1462]: 2025-05-17 03:48:29.774 [INFO][4233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfgb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"97639112-d662-401e-9525-ef6c5cfa2196", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648", Pod:"coredns-674b8bbfcf-wsfgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf1e319f968", MAC:"1a:8e:10:56:97:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:29.827577 containerd[1462]: 2025-05-17 03:48:29.810 [INFO][4233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfgb" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:29.846618 containerd[1462]: time="2025-05-17T03:48:29.846090326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67566d8f66-fbrbb,Uid:9f1bd9b0-7eec-4bda-8bf7-c4484df07375,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a\"" May 17 03:48:29.851032 containerd[1462]: time="2025-05-17T03:48:29.850975937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 03:48:29.880725 systemd-networkd[1368]: cali9c6b5be0214: Link UP May 17 03:48:29.883250 systemd-networkd[1368]: cali9c6b5be0214: Gained carrier May 17 03:48:29.893350 containerd[1462]: time="2025-05-17T03:48:29.893063878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:48:29.893350 containerd[1462]: time="2025-05-17T03:48:29.893121495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:48:29.893350 containerd[1462]: time="2025-05-17T03:48:29.893135073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:29.893812 containerd[1462]: time="2025-05-17T03:48:29.893742301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.499 [INFO][4242] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.524 [INFO][4242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0 calico-apiserver-67566d8f66- calico-apiserver 08766a61-c1c3-45ec-a870-662027187849 984 0 2025-05-17 03:47:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67566d8f66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-2f0bbd4ac2.novalocal calico-apiserver-67566d8f66-qmgwp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9c6b5be0214 [] [] }} ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-qmgwp" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.524 [INFO][4242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-qmgwp" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.594 [INFO][4270] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" HandleID="k8s-pod-network.f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" 
Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.594 [INFO][4270] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" HandleID="k8s-pod-network.f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048d570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-2f0bbd4ac2.novalocal", "pod":"calico-apiserver-67566d8f66-qmgwp", "timestamp":"2025-05-17 03:48:29.594556985 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.594 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.755 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.755 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2f0bbd4ac2.novalocal' May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.779 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.797 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.813 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.829 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.834 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.834 [INFO][4270] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.837 [INFO][4270] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2 May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.847 [INFO][4270] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.911085 
containerd[1462]: 2025-05-17 03:48:29.867 [INFO][4270] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.198/26] block=192.168.16.192/26 handle="k8s-pod-network.f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.867 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.198/26] handle="k8s-pod-network.f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.867 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:29.911085 containerd[1462]: 2025-05-17 03:48:29.867 [INFO][4270] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.198/26] IPv6=[] ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" HandleID="k8s-pod-network.f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.912001 containerd[1462]: 2025-05-17 03:48:29.873 [INFO][4242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-qmgwp" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0", GenerateName:"calico-apiserver-67566d8f66-", Namespace:"calico-apiserver", SelfLink:"", UID:"08766a61-c1c3-45ec-a870-662027187849", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 56, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67566d8f66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"", Pod:"calico-apiserver-67566d8f66-qmgwp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c6b5be0214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:29.912001 containerd[1462]: 2025-05-17 03:48:29.873 [INFO][4242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.198/32] ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-qmgwp" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.912001 containerd[1462]: 2025-05-17 03:48:29.873 [INFO][4242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c6b5be0214 ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-qmgwp" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.912001 containerd[1462]: 2025-05-17 03:48:29.883 [INFO][4242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-qmgwp" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.912001 containerd[1462]: 2025-05-17 03:48:29.884 [INFO][4242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-qmgwp" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0", GenerateName:"calico-apiserver-67566d8f66-", Namespace:"calico-apiserver", SelfLink:"", UID:"08766a61-c1c3-45ec-a870-662027187849", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67566d8f66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2", Pod:"calico-apiserver-67566d8f66-qmgwp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.198/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c6b5be0214", MAC:"4a:f2:ac:5c:62:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:29.912001 containerd[1462]: 2025-05-17 03:48:29.906 [INFO][4242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2" Namespace="calico-apiserver" Pod="calico-apiserver-67566d8f66-qmgwp" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:29.941265 systemd[1]: run-containerd-runc-k8s.io-bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648-runc.wfHUkE.mount: Deactivated successfully. May 17 03:48:29.955447 systemd[1]: Started cri-containerd-bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648.scope - libcontainer container bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648. May 17 03:48:29.969915 containerd[1462]: time="2025-05-17T03:48:29.969577390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:48:29.969915 containerd[1462]: time="2025-05-17T03:48:29.969653675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:48:29.969915 containerd[1462]: time="2025-05-17T03:48:29.969672133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:29.969915 containerd[1462]: time="2025-05-17T03:48:29.969772077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:30.001589 systemd[1]: Started cri-containerd-f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2.scope - libcontainer container f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2. May 17 03:48:30.024707 containerd[1462]: time="2025-05-17T03:48:30.024489037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wsfgb,Uid:97639112-d662-401e-9525-ef6c5cfa2196,Namespace:kube-system,Attempt:1,} returns sandbox id \"bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648\"" May 17 03:48:30.049971 containerd[1462]: time="2025-05-17T03:48:30.049826850Z" level=info msg="CreateContainer within sandbox \"bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 03:48:30.088485 containerd[1462]: time="2025-05-17T03:48:30.087463173Z" level=info msg="CreateContainer within sandbox \"bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed84ed740274b88331a39806eea0726273a3e4148a4e900155bb3d5d3d711b1c\"" May 17 03:48:30.090552 containerd[1462]: time="2025-05-17T03:48:30.090507763Z" level=info msg="StartContainer for \"ed84ed740274b88331a39806eea0726273a3e4148a4e900155bb3d5d3d711b1c\"" May 17 03:48:30.139961 containerd[1462]: time="2025-05-17T03:48:30.139835041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67566d8f66-qmgwp,Uid:08766a61-c1c3-45ec-a870-662027187849,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2\"" May 17 03:48:30.156212 containerd[1462]: time="2025-05-17T03:48:30.155894792Z" level=info msg="StopPodSandbox for \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\"" May 17 03:48:30.177391 systemd[1]: Started 
cri-containerd-ed84ed740274b88331a39806eea0726273a3e4148a4e900155bb3d5d3d711b1c.scope - libcontainer container ed84ed740274b88331a39806eea0726273a3e4148a4e900155bb3d5d3d711b1c. May 17 03:48:30.234669 containerd[1462]: time="2025-05-17T03:48:30.234599789Z" level=info msg="StartContainer for \"ed84ed740274b88331a39806eea0726273a3e4148a4e900155bb3d5d3d711b1c\" returns successfully" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.264 [INFO][4531] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.265 [INFO][4531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" iface="eth0" netns="/var/run/netns/cni-64792337-d723-0a8c-c0ab-fecf9ab9d8f2" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.266 [INFO][4531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" iface="eth0" netns="/var/run/netns/cni-64792337-d723-0a8c-c0ab-fecf9ab9d8f2" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.266 [INFO][4531] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" iface="eth0" netns="/var/run/netns/cni-64792337-d723-0a8c-c0ab-fecf9ab9d8f2" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.266 [INFO][4531] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.266 [INFO][4531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.328 [INFO][4554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" HandleID="k8s-pod-network.33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.329 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.329 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.338 [WARNING][4554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" HandleID="k8s-pod-network.33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.338 [INFO][4554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" HandleID="k8s-pod-network.33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.341 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:30.355401 containerd[1462]: 2025-05-17 03:48:30.345 [INFO][4531] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:30.359645 containerd[1462]: time="2025-05-17T03:48:30.357062714Z" level=info msg="TearDown network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\" successfully" May 17 03:48:30.359645 containerd[1462]: time="2025-05-17T03:48:30.357112425Z" level=info msg="StopPodSandbox for \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\" returns successfully" May 17 03:48:30.363563 containerd[1462]: time="2025-05-17T03:48:30.363241254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw9vx,Uid:ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134,Namespace:calico-system,Attempt:1,}" May 17 03:48:30.388463 systemd-networkd[1368]: calia89077813ee: Gained IPv6LL May 17 03:48:30.627366 systemd-networkd[1368]: cali82266658f3d: Link UP May 17 03:48:30.629372 systemd-networkd[1368]: cali82266658f3d: Gained carrier May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.428 [INFO][4563] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.445 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0 csi-node-driver- calico-system ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134 1013 0 2025-05-17 03:48:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-2f0bbd4ac2.novalocal csi-node-driver-kw9vx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali82266658f3d [] [] }} ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Namespace="calico-system" Pod="csi-node-driver-kw9vx" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.446 [INFO][4563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Namespace="calico-system" Pod="csi-node-driver-kw9vx" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.524 [INFO][4576] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" HandleID="k8s-pod-network.e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.525 [INFO][4576] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" HandleID="k8s-pod-network.e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-2f0bbd4ac2.novalocal", "pod":"csi-node-driver-kw9vx", "timestamp":"2025-05-17 03:48:30.523966077 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.525 [INFO][4576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.526 [INFO][4576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.526 [INFO][4576] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2f0bbd4ac2.novalocal' May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.547 [INFO][4576] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.568 [INFO][4576] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.581 [INFO][4576] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.585 [INFO][4576] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.591 [INFO][4576] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.591 [INFO][4576] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.595 [INFO][4576] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389 May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.603 [INFO][4576] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:30.694886 
containerd[1462]: 2025-05-17 03:48:30.614 [INFO][4576] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.199/26] block=192.168.16.192/26 handle="k8s-pod-network.e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.614 [INFO][4576] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.199/26] handle="k8s-pod-network.e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.614 [INFO][4576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:30.694886 containerd[1462]: 2025-05-17 03:48:30.614 [INFO][4576] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.199/26] IPv6=[] ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" HandleID="k8s-pod-network.e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.697608 containerd[1462]: 2025-05-17 03:48:30.621 [INFO][4563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Namespace="calico-system" Pod="csi-node-driver-kw9vx" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"", Pod:"csi-node-driver-kw9vx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82266658f3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:30.697608 containerd[1462]: 2025-05-17 03:48:30.621 [INFO][4563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.199/32] ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Namespace="calico-system" Pod="csi-node-driver-kw9vx" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.697608 containerd[1462]: 2025-05-17 03:48:30.621 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82266658f3d ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Namespace="calico-system" Pod="csi-node-driver-kw9vx" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.697608 containerd[1462]: 2025-05-17 03:48:30.630 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Namespace="calico-system" Pod="csi-node-driver-kw9vx" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.697608 containerd[1462]: 2025-05-17 03:48:30.631 [INFO][4563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Namespace="calico-system" Pod="csi-node-driver-kw9vx" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389", Pod:"csi-node-driver-kw9vx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82266658f3d", MAC:"ba:f9:4e:92:da:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:30.697608 containerd[1462]: 2025-05-17 03:48:30.685 [INFO][4563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389" Namespace="calico-system" Pod="csi-node-driver-kw9vx" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:30.803434 containerd[1462]: time="2025-05-17T03:48:30.800414492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:48:30.803434 containerd[1462]: time="2025-05-17T03:48:30.800510758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:48:30.803434 containerd[1462]: time="2025-05-17T03:48:30.800530348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:30.803434 containerd[1462]: time="2025-05-17T03:48:30.800633588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:30.816425 kubelet[2567]: E0517 03:48:30.812507 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:48:30.871047 systemd[1]: run-netns-cni\x2d64792337\x2dd723\x2d0a8c\x2dc0ab\x2dfecf9ab9d8f2.mount: Deactivated successfully. 
May 17 03:48:30.880211 kubelet[2567]: I0517 03:48:30.879433 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wsfgb" podStartSLOduration=46.879406843 podStartE2EDuration="46.879406843s" podCreationTimestamp="2025-05-17 03:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 03:48:30.837922375 +0000 UTC m=+50.803507659" watchObservedRunningTime="2025-05-17 03:48:30.879406843 +0000 UTC m=+50.844992148" May 17 03:48:30.892467 systemd[1]: Started cri-containerd-e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389.scope - libcontainer container e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389. May 17 03:48:30.972724 containerd[1462]: time="2025-05-17T03:48:30.972084191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw9vx,Uid:ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134,Namespace:calico-system,Attempt:1,} returns sandbox id \"e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389\"" May 17 03:48:31.150931 containerd[1462]: time="2025-05-17T03:48:31.150387853Z" level=info msg="StopPodSandbox for \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\"" May 17 03:48:31.241232 kernel: bpftool[4683]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.244 [INFO][4667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.244 [INFO][4667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" iface="eth0" netns="/var/run/netns/cni-94e05ebe-36b2-6bf3-a221-326fdafd748e" May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.244 [INFO][4667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" iface="eth0" netns="/var/run/netns/cni-94e05ebe-36b2-6bf3-a221-326fdafd748e" May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.245 [INFO][4667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" iface="eth0" netns="/var/run/netns/cni-94e05ebe-36b2-6bf3-a221-326fdafd748e" May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.245 [INFO][4667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.245 [INFO][4667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.291 [INFO][4685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" HandleID="k8s-pod-network.151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.291 [INFO][4685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.291 [INFO][4685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.302 [WARNING][4685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" HandleID="k8s-pod-network.151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.302 [INFO][4685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" HandleID="k8s-pod-network.151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.304 [INFO][4685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:31.308001 containerd[1462]: 2025-05-17 03:48:31.306 [INFO][4667] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:31.311541 containerd[1462]: time="2025-05-17T03:48:31.309306012Z" level=info msg="TearDown network for sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\" successfully" May 17 03:48:31.311541 containerd[1462]: time="2025-05-17T03:48:31.309336845Z" level=info msg="StopPodSandbox for \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\" returns successfully" May 17 03:48:31.311541 containerd[1462]: time="2025-05-17T03:48:31.309976565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bbf4dcdfc-vlsx4,Uid:c7325861-b11f-4c03-8427-1ec9f970f69e,Namespace:calico-system,Attempt:1,}" May 17 03:48:31.314276 systemd[1]: run-netns-cni\x2d94e05ebe\x2d36b2\x2d6bf3\x2da221\x2d326fdafd748e.mount: Deactivated successfully. 
May 17 03:48:31.412589 systemd-networkd[1368]: cali89efc8cce05: Gained IPv6LL May 17 03:48:31.412902 systemd-networkd[1368]: cali9c6b5be0214: Gained IPv6LL May 17 03:48:31.540736 systemd-networkd[1368]: caliaf1e319f968: Gained IPv6LL May 17 03:48:31.582449 systemd-networkd[1368]: cali1876ada1b85: Link UP May 17 03:48:31.584337 systemd-networkd[1368]: cali1876ada1b85: Gained carrier May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.410 [INFO][4692] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0 calico-kube-controllers-bbf4dcdfc- calico-system c7325861-b11f-4c03-8427-1ec9f970f69e 1036 0 2025-05-17 03:48:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bbf4dcdfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-2f0bbd4ac2.novalocal calico-kube-controllers-bbf4dcdfc-vlsx4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1876ada1b85 [] [] }} ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Namespace="calico-system" Pod="calico-kube-controllers-bbf4dcdfc-vlsx4" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.411 [INFO][4692] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Namespace="calico-system" Pod="calico-kube-controllers-bbf4dcdfc-vlsx4" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.491 [INFO][4703] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" HandleID="k8s-pod-network.36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.492 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" HandleID="k8s-pod-network.36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-2f0bbd4ac2.novalocal", "pod":"calico-kube-controllers-bbf4dcdfc-vlsx4", "timestamp":"2025-05-17 03:48:31.490540057 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.492 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.492 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.492 [INFO][4703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2f0bbd4ac2.novalocal' May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.507 [INFO][4703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.520 [INFO][4703] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.527 [INFO][4703] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.532 [INFO][4703] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.536 [INFO][4703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.536 [INFO][4703] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.538 [INFO][4703] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317 May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.545 [INFO][4703] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:31.604660 
containerd[1462]: 2025-05-17 03:48:31.557 [INFO][4703] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.200/26] block=192.168.16.192/26 handle="k8s-pod-network.36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.557 [INFO][4703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.200/26] handle="k8s-pod-network.36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" host="ci-4081-3-3-n-2f0bbd4ac2.novalocal" May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.557 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:31.604660 containerd[1462]: 2025-05-17 03:48:31.557 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.200/26] IPv6=[] ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" HandleID="k8s-pod-network.36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.605489 containerd[1462]: 2025-05-17 03:48:31.564 [INFO][4692] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Namespace="calico-system" Pod="calico-kube-controllers-bbf4dcdfc-vlsx4" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0", GenerateName:"calico-kube-controllers-bbf4dcdfc-", Namespace:"calico-system", SelfLink:"", UID:"c7325861-b11f-4c03-8427-1ec9f970f69e", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 0, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bbf4dcdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"", Pod:"calico-kube-controllers-bbf4dcdfc-vlsx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1876ada1b85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:31.605489 containerd[1462]: 2025-05-17 03:48:31.564 [INFO][4692] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.200/32] ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Namespace="calico-system" Pod="calico-kube-controllers-bbf4dcdfc-vlsx4" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.605489 containerd[1462]: 2025-05-17 03:48:31.564 [INFO][4692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1876ada1b85 ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Namespace="calico-system" Pod="calico-kube-controllers-bbf4dcdfc-vlsx4" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.605489 containerd[1462]: 2025-05-17 03:48:31.582 
[INFO][4692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Namespace="calico-system" Pod="calico-kube-controllers-bbf4dcdfc-vlsx4" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.605489 containerd[1462]: 2025-05-17 03:48:31.584 [INFO][4692] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Namespace="calico-system" Pod="calico-kube-controllers-bbf4dcdfc-vlsx4" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0", GenerateName:"calico-kube-controllers-bbf4dcdfc-", Namespace:"calico-system", SelfLink:"", UID:"c7325861-b11f-4c03-8427-1ec9f970f69e", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bbf4dcdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317", Pod:"calico-kube-controllers-bbf4dcdfc-vlsx4", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1876ada1b85", MAC:"aa:de:d9:f8:a8:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:31.605489 containerd[1462]: 2025-05-17 03:48:31.601 [INFO][4692] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317" Namespace="calico-system" Pod="calico-kube-controllers-bbf4dcdfc-vlsx4" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:31.641113 containerd[1462]: time="2025-05-17T03:48:31.640633006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 03:48:31.641113 containerd[1462]: time="2025-05-17T03:48:31.640720985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 03:48:31.641113 containerd[1462]: time="2025-05-17T03:48:31.640741586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:31.641113 containerd[1462]: time="2025-05-17T03:48:31.640876381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 03:48:31.688456 systemd[1]: Started cri-containerd-36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317.scope - libcontainer container 36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317. 
May 17 03:48:31.733826 systemd-networkd[1368]: cali82266658f3d: Gained IPv6LL May 17 03:48:31.826611 systemd-networkd[1368]: vxlan.calico: Link UP May 17 03:48:31.826622 systemd-networkd[1368]: vxlan.calico: Gained carrier May 17 03:48:31.844149 containerd[1462]: time="2025-05-17T03:48:31.844103968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bbf4dcdfc-vlsx4,Uid:c7325861-b11f-4c03-8427-1ec9f970f69e,Namespace:calico-system,Attempt:1,} returns sandbox id \"36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317\"" May 17 03:48:33.205174 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL May 17 03:48:33.588423 systemd-networkd[1368]: cali1876ada1b85: Gained IPv6LL May 17 03:48:35.146564 containerd[1462]: time="2025-05-17T03:48:35.146465292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:35.149403 containerd[1462]: time="2025-05-17T03:48:35.149294985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 03:48:35.150735 containerd[1462]: time="2025-05-17T03:48:35.150687767Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:35.182554 containerd[1462]: time="2025-05-17T03:48:35.181687672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:35.184777 containerd[1462]: time="2025-05-17T03:48:35.184707972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 5.33342521s" May 17 03:48:35.184858 containerd[1462]: time="2025-05-17T03:48:35.184830550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 03:48:35.189962 containerd[1462]: time="2025-05-17T03:48:35.189894881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 03:48:35.206142 containerd[1462]: time="2025-05-17T03:48:35.205735112Z" level=info msg="CreateContainer within sandbox \"7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 03:48:35.234712 containerd[1462]: time="2025-05-17T03:48:35.234641858Z" level=info msg="CreateContainer within sandbox \"7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cfffbe4a6e63479dfa678a37caa732f9bc7079913461bec2c7652ffacf4e5767\"" May 17 03:48:35.237104 containerd[1462]: time="2025-05-17T03:48:35.237066070Z" level=info msg="StartContainer for \"cfffbe4a6e63479dfa678a37caa732f9bc7079913461bec2c7652ffacf4e5767\"" May 17 03:48:35.293445 systemd[1]: Started cri-containerd-cfffbe4a6e63479dfa678a37caa732f9bc7079913461bec2c7652ffacf4e5767.scope - libcontainer container cfffbe4a6e63479dfa678a37caa732f9bc7079913461bec2c7652ffacf4e5767. 
May 17 03:48:35.351296 containerd[1462]: time="2025-05-17T03:48:35.350757379Z" level=info msg="StartContainer for \"cfffbe4a6e63479dfa678a37caa732f9bc7079913461bec2c7652ffacf4e5767\" returns successfully" May 17 03:48:35.736347 containerd[1462]: time="2025-05-17T03:48:35.734721022Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:35.738711 containerd[1462]: time="2025-05-17T03:48:35.738586594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 03:48:35.749256 containerd[1462]: time="2025-05-17T03:48:35.749149885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 559.113178ms" May 17 03:48:35.749542 containerd[1462]: time="2025-05-17T03:48:35.749454472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 03:48:35.755028 containerd[1462]: time="2025-05-17T03:48:35.754951409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 03:48:35.763914 containerd[1462]: time="2025-05-17T03:48:35.763844757Z" level=info msg="CreateContainer within sandbox \"f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 03:48:35.806631 containerd[1462]: time="2025-05-17T03:48:35.806519997Z" level=info msg="CreateContainer within sandbox \"f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"1e3e38a9168c704f555024be0e06fc6b9006a7a008eeb5076e9999c341927981\"" May 17 03:48:35.809101 containerd[1462]: time="2025-05-17T03:48:35.807533019Z" level=info msg="StartContainer for \"1e3e38a9168c704f555024be0e06fc6b9006a7a008eeb5076e9999c341927981\"" May 17 03:48:35.863560 systemd[1]: Started cri-containerd-1e3e38a9168c704f555024be0e06fc6b9006a7a008eeb5076e9999c341927981.scope - libcontainer container 1e3e38a9168c704f555024be0e06fc6b9006a7a008eeb5076e9999c341927981. May 17 03:48:35.890361 kubelet[2567]: I0517 03:48:35.889888 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67566d8f66-fbrbb" podStartSLOduration=34.55130228 podStartE2EDuration="39.889643316s" podCreationTimestamp="2025-05-17 03:47:56 +0000 UTC" firstStartedPulling="2025-05-17 03:48:29.850000508 +0000 UTC m=+49.815585782" lastFinishedPulling="2025-05-17 03:48:35.188341544 +0000 UTC m=+55.153926818" observedRunningTime="2025-05-17 03:48:35.888163007 +0000 UTC m=+55.853748311" watchObservedRunningTime="2025-05-17 03:48:35.889643316 +0000 UTC m=+55.855228600" May 17 03:48:35.956068 containerd[1462]: time="2025-05-17T03:48:35.956013272Z" level=info msg="StartContainer for \"1e3e38a9168c704f555024be0e06fc6b9006a7a008eeb5076e9999c341927981\" returns successfully" May 17 03:48:36.880566 kubelet[2567]: I0517 03:48:36.880466 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 03:48:37.884698 kubelet[2567]: I0517 03:48:37.884397 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 03:48:37.946725 containerd[1462]: time="2025-05-17T03:48:37.946648879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:37.948666 containerd[1462]: time="2025-05-17T03:48:37.948221189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes 
read=8758390" May 17 03:48:37.950841 containerd[1462]: time="2025-05-17T03:48:37.950329313Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:37.954243 containerd[1462]: time="2025-05-17T03:48:37.954179161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 03:48:37.955143 containerd[1462]: time="2025-05-17T03:48:37.955104402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.199272209s" May 17 03:48:37.955255 containerd[1462]: time="2025-05-17T03:48:37.955151978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 03:48:37.958233 containerd[1462]: time="2025-05-17T03:48:37.958182278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 03:48:37.965567 containerd[1462]: time="2025-05-17T03:48:37.965518678Z" level=info msg="CreateContainer within sandbox \"e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 03:48:37.994836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2841569936.mount: Deactivated successfully. 
May 17 03:48:37.998862 containerd[1462]: time="2025-05-17T03:48:37.998809799Z" level=info msg="CreateContainer within sandbox \"e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"107267dde0d9a44df06643365641adb408a2d7bca819d953aeb8f53b325a6124\"" May 17 03:48:38.001357 containerd[1462]: time="2025-05-17T03:48:38.001311318Z" level=info msg="StartContainer for \"107267dde0d9a44df06643365641adb408a2d7bca819d953aeb8f53b325a6124\"" May 17 03:48:38.050031 systemd[1]: run-containerd-runc-k8s.io-107267dde0d9a44df06643365641adb408a2d7bca819d953aeb8f53b325a6124-runc.F3uRHX.mount: Deactivated successfully. May 17 03:48:38.057351 systemd[1]: Started cri-containerd-107267dde0d9a44df06643365641adb408a2d7bca819d953aeb8f53b325a6124.scope - libcontainer container 107267dde0d9a44df06643365641adb408a2d7bca819d953aeb8f53b325a6124. May 17 03:48:38.105975 containerd[1462]: time="2025-05-17T03:48:38.105918692Z" level=info msg="StartContainer for \"107267dde0d9a44df06643365641adb408a2d7bca819d953aeb8f53b325a6124\" returns successfully" May 17 03:48:40.189603 containerd[1462]: time="2025-05-17T03:48:40.189336360Z" level=info msg="StopPodSandbox for \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\"" May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.255 [WARNING][4988] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0", GenerateName:"calico-kube-controllers-bbf4dcdfc-", Namespace:"calico-system", SelfLink:"", UID:"c7325861-b11f-4c03-8427-1ec9f970f69e", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bbf4dcdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317", Pod:"calico-kube-controllers-bbf4dcdfc-vlsx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1876ada1b85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.255 [INFO][4988] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.255 [INFO][4988] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" iface="eth0" netns="" May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.255 [INFO][4988] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.255 [INFO][4988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.291 [INFO][4995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" HandleID="k8s-pod-network.151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.291 [INFO][4995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.291 [INFO][4995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.303 [WARNING][4995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" HandleID="k8s-pod-network.151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.303 [INFO][4995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" HandleID="k8s-pod-network.151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.307 [INFO][4995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:40.311758 containerd[1462]: 2025-05-17 03:48:40.310 [INFO][4988] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:40.313488 containerd[1462]: time="2025-05-17T03:48:40.311805011Z" level=info msg="TearDown network for sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\" successfully" May 17 03:48:40.313488 containerd[1462]: time="2025-05-17T03:48:40.311836143Z" level=info msg="StopPodSandbox for \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\" returns successfully" May 17 03:48:40.313488 containerd[1462]: time="2025-05-17T03:48:40.312822515Z" level=info msg="RemovePodSandbox for \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\"" May 17 03:48:40.313488 containerd[1462]: time="2025-05-17T03:48:40.312858588Z" level=info msg="Forcibly stopping sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\"" May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.386 [WARNING][5009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't 
delete WEP. ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0", GenerateName:"calico-kube-controllers-bbf4dcdfc-", Namespace:"calico-system", SelfLink:"", UID:"c7325861-b11f-4c03-8427-1ec9f970f69e", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bbf4dcdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317", Pod:"calico-kube-controllers-bbf4dcdfc-vlsx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1876ada1b85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.387 [INFO][5009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.387 
[INFO][5009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" iface="eth0" netns="" May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.387 [INFO][5009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.387 [INFO][5009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.422 [INFO][5016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" HandleID="k8s-pod-network.151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.422 [INFO][5016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.422 [INFO][5016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.432 [WARNING][5016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" HandleID="k8s-pod-network.151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.432 [INFO][5016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" HandleID="k8s-pod-network.151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--kube--controllers--bbf4dcdfc--vlsx4-eth0" May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.434 [INFO][5016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:40.437561 containerd[1462]: 2025-05-17 03:48:40.436 [INFO][5009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4" May 17 03:48:40.438189 containerd[1462]: time="2025-05-17T03:48:40.437608403Z" level=info msg="TearDown network for sandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\" successfully" May 17 03:48:40.442486 containerd[1462]: time="2025-05-17T03:48:40.442131504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 03:48:40.442486 containerd[1462]: time="2025-05-17T03:48:40.442243200Z" level=info msg="RemovePodSandbox \"151778632cff9293903053b04d85e484c86e6b925187321f27c7d2acb33f74e4\" returns successfully" May 17 03:48:40.444412 containerd[1462]: time="2025-05-17T03:48:40.444100138Z" level=info msg="StopPodSandbox for \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\"" May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.501 [WARNING][5030] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0", GenerateName:"calico-apiserver-67566d8f66-", Namespace:"calico-apiserver", SelfLink:"", UID:"08766a61-c1c3-45ec-a870-662027187849", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67566d8f66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2", Pod:"calico-apiserver-67566d8f66-qmgwp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c6b5be0214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.501 [INFO][5030] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.501 [INFO][5030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" iface="eth0" netns="" May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.501 [INFO][5030] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.501 [INFO][5030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.532 [INFO][5037] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" HandleID="k8s-pod-network.58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.532 [INFO][5037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.532 [INFO][5037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.544 [WARNING][5037] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" HandleID="k8s-pod-network.58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.544 [INFO][5037] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" HandleID="k8s-pod-network.58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.546 [INFO][5037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:40.549127 containerd[1462]: 2025-05-17 03:48:40.547 [INFO][5030] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:40.550130 containerd[1462]: time="2025-05-17T03:48:40.549982949Z" level=info msg="TearDown network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\" successfully" May 17 03:48:40.550130 containerd[1462]: time="2025-05-17T03:48:40.550019672Z" level=info msg="StopPodSandbox for \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\" returns successfully" May 17 03:48:40.551001 containerd[1462]: time="2025-05-17T03:48:40.550640906Z" level=info msg="RemovePodSandbox for \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\"" May 17 03:48:40.551001 containerd[1462]: time="2025-05-17T03:48:40.550668883Z" level=info msg="Forcibly stopping sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\"" May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.617 [WARNING][5051] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0", GenerateName:"calico-apiserver-67566d8f66-", Namespace:"calico-apiserver", SelfLink:"", UID:"08766a61-c1c3-45ec-a870-662027187849", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67566d8f66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"f8ae6840101e98e12b50ef7eea71f19013e72a51e23a2884c247abf828d7f7c2", Pod:"calico-apiserver-67566d8f66-qmgwp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c6b5be0214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.617 [INFO][5051] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.617 [INFO][5051] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" iface="eth0" netns="" May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.617 [INFO][5051] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.617 [INFO][5051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.649 [INFO][5058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" HandleID="k8s-pod-network.58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.649 [INFO][5058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.649 [INFO][5058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.658 [WARNING][5058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" HandleID="k8s-pod-network.58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.658 [INFO][5058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" HandleID="k8s-pod-network.58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--qmgwp-eth0" May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.660 [INFO][5058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:40.663029 containerd[1462]: 2025-05-17 03:48:40.661 [INFO][5051] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0" May 17 03:48:40.663029 containerd[1462]: time="2025-05-17T03:48:40.663023076Z" level=info msg="TearDown network for sandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\" successfully" May 17 03:48:40.668183 containerd[1462]: time="2025-05-17T03:48:40.668136769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 03:48:40.668393 containerd[1462]: time="2025-05-17T03:48:40.668238975Z" level=info msg="RemovePodSandbox \"58dd99e379e6af397d3d2592dc329a3efcdcc02e70cc5dd96bb8a6c3f91c1dc0\" returns successfully" May 17 03:48:40.669897 containerd[1462]: time="2025-05-17T03:48:40.669795558Z" level=info msg="StopPodSandbox for \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\"" May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.733 [WARNING][5073] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0", GenerateName:"calico-apiserver-67566d8f66-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f1bd9b0-7eec-4bda-8bf7-c4484df07375", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67566d8f66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a", Pod:"calico-apiserver-67566d8f66-fbrbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89efc8cce05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.734 [INFO][5073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.734 [INFO][5073] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" iface="eth0" netns="" May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.734 [INFO][5073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.734 [INFO][5073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.761 [INFO][5080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" HandleID="k8s-pod-network.bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.761 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.761 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.769 [WARNING][5080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" HandleID="k8s-pod-network.bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.770 [INFO][5080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" HandleID="k8s-pod-network.bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.771 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:40.774928 containerd[1462]: 2025-05-17 03:48:40.773 [INFO][5073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:40.776081 containerd[1462]: time="2025-05-17T03:48:40.775493936Z" level=info msg="TearDown network for sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\" successfully" May 17 03:48:40.776081 containerd[1462]: time="2025-05-17T03:48:40.775535820Z" level=info msg="StopPodSandbox for \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\" returns successfully" May 17 03:48:40.777049 containerd[1462]: time="2025-05-17T03:48:40.776827898Z" level=info msg="RemovePodSandbox for \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\"" May 17 03:48:40.777049 containerd[1462]: time="2025-05-17T03:48:40.776864372Z" level=info msg="Forcibly stopping sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\"" May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.868 [WARNING][5094] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0", GenerateName:"calico-apiserver-67566d8f66-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f1bd9b0-7eec-4bda-8bf7-c4484df07375", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67566d8f66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"7199e38462a06f8d534b00ac7a43c10217fd5c98f3bc67c4379af0aec427c81a", Pod:"calico-apiserver-67566d8f66-fbrbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89efc8cce05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.868 [INFO][5094] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.868 [INFO][5094] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" iface="eth0" netns="" May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.868 [INFO][5094] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.868 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.899 [INFO][5105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" HandleID="k8s-pod-network.bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.900 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.900 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.913 [WARNING][5105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" HandleID="k8s-pod-network.bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.913 [INFO][5105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" HandleID="k8s-pod-network.bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-calico--apiserver--67566d8f66--fbrbb-eth0" May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.916 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:40.919480 containerd[1462]: 2025-05-17 03:48:40.918 [INFO][5094] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d" May 17 03:48:40.921390 containerd[1462]: time="2025-05-17T03:48:40.919567990Z" level=info msg="TearDown network for sandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\" successfully" May 17 03:48:40.923987 containerd[1462]: time="2025-05-17T03:48:40.923938743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 03:48:40.924057 containerd[1462]: time="2025-05-17T03:48:40.924038926Z" level=info msg="RemovePodSandbox \"bc3c008a1ec59dad89d659c996a2ef2f303dd1571ad1695b5a99920347d6523d\" returns successfully" May 17 03:48:40.924596 containerd[1462]: time="2025-05-17T03:48:40.924569417Z" level=info msg="StopPodSandbox for \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\"" May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:40.985 [WARNING][5119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"97639112-d662-401e-9525-ef6c5cfa2196", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648", Pod:"coredns-674b8bbfcf-wsfgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf1e319f968", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:40.985 [INFO][5119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:40.985 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" iface="eth0" netns="" May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:40.985 [INFO][5119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:40.985 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:41.012 [INFO][5126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" HandleID="k8s-pod-network.4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:41.012 [INFO][5126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:41.012 [INFO][5126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:41.021 [WARNING][5126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" HandleID="k8s-pod-network.4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:41.021 [INFO][5126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" HandleID="k8s-pod-network.4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:41.023 [INFO][5126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:41.026256 containerd[1462]: 2025-05-17 03:48:41.024 [INFO][5119] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:41.027013 containerd[1462]: time="2025-05-17T03:48:41.026281487Z" level=info msg="TearDown network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\" successfully" May 17 03:48:41.027013 containerd[1462]: time="2025-05-17T03:48:41.026333872Z" level=info msg="StopPodSandbox for \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\" returns successfully" May 17 03:48:41.028707 containerd[1462]: time="2025-05-17T03:48:41.028670957Z" level=info msg="RemovePodSandbox for \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\"" May 17 03:48:41.028824 containerd[1462]: time="2025-05-17T03:48:41.028740708Z" level=info msg="Forcibly stopping sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\"" May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.066 [WARNING][5140] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"97639112-d662-401e-9525-ef6c5cfa2196", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"bd56bb5ccfbb2d57789b514987bbf6076fcc4c2b845112e6f3fa96323c280648", Pod:"coredns-674b8bbfcf-wsfgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf1e319f968", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:41.103568 
containerd[1462]: 2025-05-17 03:48:41.067 [INFO][5140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.067 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" iface="eth0" netns="" May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.067 [INFO][5140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.067 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.091 [INFO][5147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" HandleID="k8s-pod-network.4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.091 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.091 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.098 [WARNING][5147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" HandleID="k8s-pod-network.4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.099 [INFO][5147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" HandleID="k8s-pod-network.4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--wsfgb-eth0" May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.100 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:41.103568 containerd[1462]: 2025-05-17 03:48:41.102 [INFO][5140] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84" May 17 03:48:41.104262 containerd[1462]: time="2025-05-17T03:48:41.103669082Z" level=info msg="TearDown network for sandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\" successfully" May 17 03:48:41.318876 containerd[1462]: time="2025-05-17T03:48:41.318584460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 03:48:41.318876 containerd[1462]: time="2025-05-17T03:48:41.318668220Z" level=info msg="RemovePodSandbox \"4a2fbccaa7d85c450f2a89a2230be52740e61e18625b3db15c429950d0242d84\" returns successfully" May 17 03:48:41.321153 containerd[1462]: time="2025-05-17T03:48:41.319118628Z" level=info msg="StopPodSandbox for \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\"" May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.405 [WARNING][5166] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988", Pod:"goldmane-78d55f7ddc-gbf9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali7ee7ecac93c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.405 [INFO][5166] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.405 [INFO][5166] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" iface="eth0" netns="" May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.405 [INFO][5166] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.405 [INFO][5166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.438 [INFO][5173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" HandleID="k8s-pod-network.d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.438 [INFO][5173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.439 [INFO][5173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.449 [WARNING][5173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" HandleID="k8s-pod-network.d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.449 [INFO][5173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" HandleID="k8s-pod-network.d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.452 [INFO][5173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:41.455350 containerd[1462]: 2025-05-17 03:48:41.453 [INFO][5166] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:41.456905 containerd[1462]: time="2025-05-17T03:48:41.456386077Z" level=info msg="TearDown network for sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\" successfully" May 17 03:48:41.456905 containerd[1462]: time="2025-05-17T03:48:41.456426088Z" level=info msg="StopPodSandbox for \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\" returns successfully" May 17 03:48:41.457558 containerd[1462]: time="2025-05-17T03:48:41.457438119Z" level=info msg="RemovePodSandbox for \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\"" May 17 03:48:41.457947 containerd[1462]: time="2025-05-17T03:48:41.457672963Z" level=info msg="Forcibly stopping sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\"" May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.528 [WARNING][5188] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"5d31f0bb-0747-4e8f-868a-d7b2d8faa68d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"3102a2b4081eb871f88ff8ea525b1da4fe68482ed74bbeab72f05697ba333988", Pod:"goldmane-78d55f7ddc-gbf9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7ee7ecac93c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.528 [INFO][5188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.528 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" iface="eth0" netns="" May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.528 [INFO][5188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.529 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.568 [INFO][5195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" HandleID="k8s-pod-network.d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.569 [INFO][5195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.569 [INFO][5195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.577 [WARNING][5195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" HandleID="k8s-pod-network.d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.577 [INFO][5195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" HandleID="k8s-pod-network.d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-goldmane--78d55f7ddc--gbf9j-eth0" May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.579 [INFO][5195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:41.582419 containerd[1462]: 2025-05-17 03:48:41.580 [INFO][5188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589" May 17 03:48:41.583831 containerd[1462]: time="2025-05-17T03:48:41.583779194Z" level=info msg="TearDown network for sandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\" successfully" May 17 03:48:41.590322 containerd[1462]: time="2025-05-17T03:48:41.590123524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 03:48:41.590418 containerd[1462]: time="2025-05-17T03:48:41.590321834Z" level=info msg="RemovePodSandbox \"d3f815b9ab8dca016ef6e2aea597bf3c0787a850e6b70cc53503210e35d7c589\" returns successfully" May 17 03:48:41.593222 containerd[1462]: time="2025-05-17T03:48:41.593147585Z" level=info msg="StopPodSandbox for \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\"" May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.654 [WARNING][5210] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389", Pod:"csi-node-driver-kw9vx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82266658f3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.655 [INFO][5210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.655 [INFO][5210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" iface="eth0" netns="" May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.655 [INFO][5210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.655 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.713 [INFO][5217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" HandleID="k8s-pod-network.33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.713 [INFO][5217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.713 [INFO][5217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.728 [WARNING][5217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" HandleID="k8s-pod-network.33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.728 [INFO][5217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" HandleID="k8s-pod-network.33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.732 [INFO][5217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:41.737264 containerd[1462]: 2025-05-17 03:48:41.734 [INFO][5210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:41.737264 containerd[1462]: time="2025-05-17T03:48:41.737077215Z" level=info msg="TearDown network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\" successfully" May 17 03:48:41.737264 containerd[1462]: time="2025-05-17T03:48:41.737122205Z" level=info msg="StopPodSandbox for \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\" returns successfully" May 17 03:48:41.738936 containerd[1462]: time="2025-05-17T03:48:41.738658524Z" level=info msg="RemovePodSandbox for \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\"" May 17 03:48:41.738936 containerd[1462]: time="2025-05-17T03:48:41.738690449Z" level=info msg="Forcibly stopping sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\"" May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.811 [WARNING][5231] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ceca80df-b7ce-42e9-b2ed-1cd3aa7b6134", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389", Pod:"csi-node-driver-kw9vx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82266658f3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.811 [INFO][5231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.811 [INFO][5231] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" iface="eth0" netns="" May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.811 [INFO][5231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.811 [INFO][5231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.863 [INFO][5238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" HandleID="k8s-pod-network.33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.863 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.863 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.879 [WARNING][5238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" HandleID="k8s-pod-network.33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.879 [INFO][5238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" HandleID="k8s-pod-network.33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-csi--node--driver--kw9vx-eth0" May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.886 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:41.890813 containerd[1462]: 2025-05-17 03:48:41.888 [INFO][5231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a" May 17 03:48:41.892551 containerd[1462]: time="2025-05-17T03:48:41.890788287Z" level=info msg="TearDown network for sandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\" successfully" May 17 03:48:41.906498 containerd[1462]: time="2025-05-17T03:48:41.906179648Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 03:48:41.906498 containerd[1462]: time="2025-05-17T03:48:41.906283117Z" level=info msg="RemovePodSandbox \"33e00e49e7941a8b7909b29088ef98a8612bb3bc38dedf11e54e100c4602ee2a\" returns successfully" May 17 03:48:41.907758 containerd[1462]: time="2025-05-17T03:48:41.907483138Z" level=info msg="StopPodSandbox for \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\"" May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:41.962 [WARNING][5252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"100470f3-1018-4b21-81fe-cdd6b96f94f3", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176", Pod:"coredns-674b8bbfcf-kqtzv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif86826a9ad5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:41.963 [INFO][5252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:41.963 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" iface="eth0" netns="" May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:41.963 [INFO][5252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:41.963 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:41.992 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" HandleID="k8s-pod-network.d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:41.992 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:41.993 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:42.004 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" HandleID="k8s-pod-network.d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:42.004 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" HandleID="k8s-pod-network.d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:42.007 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:42.011580 containerd[1462]: 2025-05-17 03:48:42.008 [INFO][5252] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:42.012357 containerd[1462]: time="2025-05-17T03:48:42.011623783Z" level=info msg="TearDown network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\" successfully" May 17 03:48:42.012357 containerd[1462]: time="2025-05-17T03:48:42.011654735Z" level=info msg="StopPodSandbox for \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\" returns successfully" May 17 03:48:42.012733 containerd[1462]: time="2025-05-17T03:48:42.012596504Z" level=info msg="RemovePodSandbox for \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\"" May 17 03:48:42.012783 containerd[1462]: time="2025-05-17T03:48:42.012732578Z" level=info msg="Forcibly stopping sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\"" May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.078 [WARNING][5273] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"100470f3-1018-4b21-81fe-cdd6b96f94f3", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 3, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2f0bbd4ac2.novalocal", ContainerID:"7dd270e1400e134671c46cfab8ebddf0a1970f484568850f6cea67e8c8259176", Pod:"coredns-674b8bbfcf-kqtzv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif86826a9ad5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 03:48:42.142557 
containerd[1462]: 2025-05-17 03:48:42.079 [INFO][5273] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.079 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" iface="eth0" netns="" May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.079 [INFO][5273] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.079 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.125 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" HandleID="k8s-pod-network.d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.125 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.125 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.137 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" HandleID="k8s-pod-network.d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.137 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" HandleID="k8s-pod-network.d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-coredns--674b8bbfcf--kqtzv-eth0" May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.139 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 03:48:42.142557 containerd[1462]: 2025-05-17 03:48:42.141 [INFO][5273] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4" May 17 03:48:42.142557 containerd[1462]: time="2025-05-17T03:48:42.142528447Z" level=info msg="TearDown network for sandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\" successfully" May 17 03:48:42.156431 containerd[1462]: time="2025-05-17T03:48:42.156023387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 03:48:42.156431 containerd[1462]: time="2025-05-17T03:48:42.156134610Z" level=info msg="RemovePodSandbox \"d380a1a4ebede28c5c5b571ceb7e898347b38eb5ceca7d4653496697891b61c4\" returns successfully" May 17 03:48:42.157685 containerd[1462]: time="2025-05-17T03:48:42.157362237Z" level=info msg="StopPodSandbox for \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\"" May 17 03:48:42.177452 kubelet[2567]: I0517 03:48:42.177254 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67566d8f66-qmgwp" podStartSLOduration=40.568511665 podStartE2EDuration="46.177224685s" podCreationTimestamp="2025-05-17 03:47:56 +0000 UTC" firstStartedPulling="2025-05-17 03:48:30.143154972 +0000 UTC m=+50.108740246" lastFinishedPulling="2025-05-17 03:48:35.751867992 +0000 UTC m=+55.717453266" observedRunningTime="2025-05-17 03:48:36.904542861 +0000 UTC m=+56.870128175" watchObservedRunningTime="2025-05-17 03:48:42.177224685 +0000 UTC m=+62.142809959" May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.251 [WARNING][5294] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0" May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.252 [INFO][5294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.252 [INFO][5294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" iface="eth0" netns=""
May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.253 [INFO][5294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13"
May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.253 [INFO][5294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13"
May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.312 [INFO][5301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" HandleID="k8s-pod-network.d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0"
May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.312 [INFO][5301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.313 [INFO][5301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.325 [WARNING][5301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" HandleID="k8s-pod-network.d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0"
May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.325 [INFO][5301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" HandleID="k8s-pod-network.d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0"
May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.328 [INFO][5301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 03:48:42.332658 containerd[1462]: 2025-05-17 03:48:42.329 [INFO][5294] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13"
May 17 03:48:42.332658 containerd[1462]: time="2025-05-17T03:48:42.331940754Z" level=info msg="TearDown network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\" successfully"
May 17 03:48:42.332658 containerd[1462]: time="2025-05-17T03:48:42.331976856Z" level=info msg="StopPodSandbox for \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\" returns successfully"
May 17 03:48:42.333713 containerd[1462]: time="2025-05-17T03:48:42.333512487Z" level=info msg="RemovePodSandbox for \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\""
May 17 03:48:42.333713 containerd[1462]: time="2025-05-17T03:48:42.333554129Z" level=info msg="Forcibly stopping sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\""
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.408 [WARNING][5316] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" WorkloadEndpoint="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0"
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.409 [INFO][5316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13"
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.409 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" iface="eth0" netns=""
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.409 [INFO][5316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13"
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.409 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13"
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.444 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" HandleID="k8s-pod-network.d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0"
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.444 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.444 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.454 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" HandleID="k8s-pod-network.d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0"
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.455 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" HandleID="k8s-pod-network.d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13" Workload="ci--4081--3--3--n--2f0bbd4ac2.novalocal-k8s-whisker--578fbf45d9--l5h8n-eth0"
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.457 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 03:48:42.459878 containerd[1462]: 2025-05-17 03:48:42.458 [INFO][5316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13"
May 17 03:48:42.460623 containerd[1462]: time="2025-05-17T03:48:42.459928088Z" level=info msg="TearDown network for sandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\" successfully"
May 17 03:48:42.726368 containerd[1462]: time="2025-05-17T03:48:42.726146661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 03:48:42.726368 containerd[1462]: time="2025-05-17T03:48:42.726272060Z" level=info msg="RemovePodSandbox \"d92db4b747069569589646d0f2c5313d1ddd32eba38d9c35198e3322df1bfe13\" returns successfully"
May 17 03:48:42.965973 containerd[1462]: time="2025-05-17T03:48:42.965882927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:48:42.967404 containerd[1462]: time="2025-05-17T03:48:42.967339019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512"
May 17 03:48:42.968688 containerd[1462]: time="2025-05-17T03:48:42.968582701Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:48:42.971414 containerd[1462]: time="2025-05-17T03:48:42.971337323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:48:42.973303 containerd[1462]: time="2025-05-17T03:48:42.972115019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 5.012850422s"
May 17 03:48:42.973303 containerd[1462]: time="2025-05-17T03:48:42.972169236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\""
May 17 03:48:42.974122 containerd[1462]: time="2025-05-17T03:48:42.974092258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\""
May 17 03:48:43.045252 containerd[1462]: time="2025-05-17T03:48:43.044561130Z" level=info msg="CreateContainer within sandbox \"36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 17 03:48:43.206967 containerd[1462]: time="2025-05-17T03:48:43.206875788Z" level=info msg="CreateContainer within sandbox \"36fc035cbe3c43b2a1484e60fad8bd399b3bed0d3401dadb2e49565617f28317\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"166966b8bd28bd77a3bfc415465837987306c4b36dc43f7146943bf3de48a8b1\""
May 17 03:48:43.209267 containerd[1462]: time="2025-05-17T03:48:43.209157536Z" level=info msg="StartContainer for \"166966b8bd28bd77a3bfc415465837987306c4b36dc43f7146943bf3de48a8b1\""
May 17 03:48:43.383390 systemd[1]: Started cri-containerd-166966b8bd28bd77a3bfc415465837987306c4b36dc43f7146943bf3de48a8b1.scope - libcontainer container 166966b8bd28bd77a3bfc415465837987306c4b36dc43f7146943bf3de48a8b1.
May 17 03:48:43.445801 containerd[1462]: time="2025-05-17T03:48:43.445621795Z" level=info msg="StartContainer for \"166966b8bd28bd77a3bfc415465837987306c4b36dc43f7146943bf3de48a8b1\" returns successfully"
May 17 03:48:43.992817 kubelet[2567]: I0517 03:48:43.992575 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bbf4dcdfc-vlsx4" podStartSLOduration=32.866835148 podStartE2EDuration="43.992533054s" podCreationTimestamp="2025-05-17 03:48:00 +0000 UTC" firstStartedPulling="2025-05-17 03:48:31.848124075 +0000 UTC m=+51.813709359" lastFinishedPulling="2025-05-17 03:48:42.973821991 +0000 UTC m=+62.939407265" observedRunningTime="2025-05-17 03:48:43.987014703 +0000 UTC m=+63.952600037" watchObservedRunningTime="2025-05-17 03:48:43.992533054 +0000 UTC m=+63.958118378"
May 17 03:48:44.003153 systemd[1]: run-containerd-runc-k8s.io-166966b8bd28bd77a3bfc415465837987306c4b36dc43f7146943bf3de48a8b1-runc.kROV7F.mount: Deactivated successfully.
May 17 03:48:45.720349 containerd[1462]: time="2025-05-17T03:48:45.720017765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:48:45.722082 containerd[1462]: time="2025-05-17T03:48:45.721508973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639"
May 17 03:48:45.723378 containerd[1462]: time="2025-05-17T03:48:45.723319148Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:48:45.726367 containerd[1462]: time="2025-05-17T03:48:45.726328642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 03:48:45.727410 containerd[1462]: time="2025-05-17T03:48:45.727216735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.752535135s"
May 17 03:48:45.727410 containerd[1462]: time="2025-05-17T03:48:45.727264719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 17 03:48:45.729184 containerd[1462]: time="2025-05-17T03:48:45.728691374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 03:48:45.740418 containerd[1462]: time="2025-05-17T03:48:45.740364498Z" level=info msg="CreateContainer within sandbox \"e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 17 03:48:45.771204 containerd[1462]: time="2025-05-17T03:48:45.771130152Z" level=info msg="CreateContainer within sandbox \"e498f5cb3879f88faa660c9444d78e8d500d888c7df37d81fb77041280446389\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d175e0b8bb798b702ecb447afc48f0faf541f55f0d12a4d4c857eb819ca1b1f5\""
May 17 03:48:45.772577 containerd[1462]: time="2025-05-17T03:48:45.771950347Z" level=info msg="StartContainer for \"d175e0b8bb798b702ecb447afc48f0faf541f55f0d12a4d4c857eb819ca1b1f5\""
May 17 03:48:45.824419 systemd[1]: Started cri-containerd-d175e0b8bb798b702ecb447afc48f0faf541f55f0d12a4d4c857eb819ca1b1f5.scope - libcontainer container d175e0b8bb798b702ecb447afc48f0faf541f55f0d12a4d4c857eb819ca1b1f5.
May 17 03:48:45.868955 containerd[1462]: time="2025-05-17T03:48:45.868896108Z" level=info msg="StartContainer for \"d175e0b8bb798b702ecb447afc48f0faf541f55f0d12a4d4c857eb819ca1b1f5\" returns successfully"
May 17 03:48:45.973025 kubelet[2567]: I0517 03:48:45.970536 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kw9vx" podStartSLOduration=31.217771736 podStartE2EDuration="45.970511177s" podCreationTimestamp="2025-05-17 03:48:00 +0000 UTC" firstStartedPulling="2025-05-17 03:48:30.97573313 +0000 UTC m=+50.941318404" lastFinishedPulling="2025-05-17 03:48:45.728472561 +0000 UTC m=+65.694057845" observedRunningTime="2025-05-17 03:48:45.970121224 +0000 UTC m=+65.935706508" watchObservedRunningTime="2025-05-17 03:48:45.970511177 +0000 UTC m=+65.936096451"
May 17 03:48:46.097113 containerd[1462]: time="2025-05-17T03:48:46.096944806Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 03:48:46.098703 containerd[1462]: time="2025-05-17T03:48:46.098623232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 17 03:48:46.098890 containerd[1462]: time="2025-05-17T03:48:46.098735510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 17 03:48:46.099074 kubelet[2567]: E0517 03:48:46.098946 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 03:48:46.099074 kubelet[2567]: E0517 03:48:46.099054 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 03:48:46.099901 containerd[1462]: time="2025-05-17T03:48:46.099392884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 03:48:46.101950 kubelet[2567]: E0517 03:48:46.100640 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d884af590bea4bba8c65a41c6bf35a3a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 03:48:46.383410 kubelet[2567]: I0517 03:48:46.382903 2567 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 17 03:48:46.385434 kubelet[2567]: I0517 03:48:46.385400 2567 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 17 03:48:46.454185 containerd[1462]: time="2025-05-17T03:48:46.453823622Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 03:48:46.456328 containerd[1462]: time="2025-05-17T03:48:46.456169572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 03:48:46.456538 containerd[1462]: time="2025-05-17T03:48:46.456238543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 03:48:46.456871 kubelet[2567]: E0517 03:48:46.456774 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 03:48:46.457019 kubelet[2567]: E0517 03:48:46.456906 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 03:48:46.457931 kubelet[2567]: E0517 03:48:46.457477 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccdp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gbf9j_calico-system(5d31f0bb-0747-4e8f-868a-d7b2d8faa68d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 03:48:46.458259 containerd[1462]: time="2025-05-17T03:48:46.457639381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 17 03:48:46.459690 kubelet[2567]: E0517 03:48:46.459479 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:48:46.812982 containerd[1462]: time="2025-05-17T03:48:46.812562696Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 03:48:46.815173 containerd[1462]: time="2025-05-17T03:48:46.814913483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 17 03:48:46.815173 containerd[1462]: time="2025-05-17T03:48:46.815001297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 17 03:48:46.815579 kubelet[2567]: E0517 03:48:46.815406 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 03:48:46.815913 kubelet[2567]: E0517 03:48:46.815667 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 03:48:46.816089 kubelet[2567]: E0517 03:48:46.815939 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 03:48:46.817764 kubelet[2567]: E0517 03:48:46.817609 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:48:54.467881 kubelet[2567]: I0517 03:48:54.467226 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 03:48:59.152743 kubelet[2567]: E0517 03:48:59.152587 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:49:02.160445 kubelet[2567]: E0517 03:49:02.160305 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:49:11.154296 containerd[1462]: time="2025-05-17T03:49:11.153418891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 03:49:11.497694 containerd[1462]: time="2025-05-17T03:49:11.497434096Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 03:49:11.499397 containerd[1462]: time="2025-05-17T03:49:11.499215303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 03:49:11.501473 kubelet[2567]: E0517 03:49:11.499778 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 03:49:11.501473 kubelet[2567]: E0517 03:49:11.500062 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 03:49:11.501473 kubelet[2567]: E0517 03:49:11.500748 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccdp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gbf9j_calico-system(5d31f0bb-0747-4e8f-868a-d7b2d8faa68d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 03:49:11.502259 kubelet[2567]: E0517 03:49:11.502079 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:49:11.502398 containerd[1462]:
time="2025-05-17T03:49:11.499353091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 03:49:11.570480 kubelet[2567]: I0517 03:49:11.570424 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 03:49:13.997300 systemd[1]: run-containerd-runc-k8s.io-166966b8bd28bd77a3bfc415465837987306c4b36dc43f7146943bf3de48a8b1-runc.YI7M0F.mount: Deactivated successfully. May 17 03:49:17.155405 containerd[1462]: time="2025-05-17T03:49:17.154995073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 03:49:17.535813 containerd[1462]: time="2025-05-17T03:49:17.535455060Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:49:17.538716 containerd[1462]: time="2025-05-17T03:49:17.538605865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:49:17.541299 containerd[1462]: time="2025-05-17T03:49:17.538933223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 03:49:17.541459 kubelet[2567]: E0517 03:49:17.539493 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 03:49:17.541459 kubelet[2567]: E0517 03:49:17.539628 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 03:49:17.541459 kubelet[2567]: E0517 03:49:17.539939 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d884af590bea4bba8c65a41c6bf35a3a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfi
le:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:49:17.544781 containerd[1462]: time="2025-05-17T03:49:17.544413721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 03:49:17.882803 containerd[1462]: time="2025-05-17T03:49:17.882655702Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:49:17.885098 containerd[1462]: time="2025-05-17T03:49:17.884879886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:49:17.885098 containerd[1462]: time="2025-05-17T03:49:17.884956861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes 
read=86" May 17 03:49:17.885661 kubelet[2567]: E0517 03:49:17.885132 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 03:49:17.885661 kubelet[2567]: E0517 03:49:17.885230 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 03:49:17.886855 kubelet[2567]: E0517 03:49:17.886736 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:49:17.888042 kubelet[2567]: E0517 03:49:17.887984 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:49:23.151278 kubelet[2567]: E0517 03:49:23.150973 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:49:29.158234 kubelet[2567]: E0517 03:49:29.155814 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:49:35.151998 kubelet[2567]: E0517 03:49:35.151562 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:49:44.157611 kubelet[2567]: E0517 03:49:44.156488 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:49:49.153528 kubelet[2567]: E0517 03:49:49.152496 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:49:56.155794 kubelet[2567]: E0517 03:49:56.155591 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:50:02.155943 containerd[1462]: time="2025-05-17T03:50:02.155705849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 03:50:02.530772 containerd[1462]: time="2025-05-17T03:50:02.530602656Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 
03:50:02.533182 containerd[1462]: time="2025-05-17T03:50:02.532833598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:50:02.533182 containerd[1462]: time="2025-05-17T03:50:02.532933713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 03:50:02.533898 kubelet[2567]: E0517 03:50:02.533579 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 03:50:02.533898 kubelet[2567]: E0517 03:50:02.533883 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 03:50:02.536773 kubelet[2567]: E0517 03:50:02.534606 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccdp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gbf9j_calico-system(5d31f0bb-0747-4e8f-868a-d7b2d8faa68d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:50:02.536773 kubelet[2567]: E0517 03:50:02.536169 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:50:10.156148 containerd[1462]: 
time="2025-05-17T03:50:10.155717791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 03:50:10.513556 containerd[1462]: time="2025-05-17T03:50:10.513429905Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:50:10.515608 containerd[1462]: time="2025-05-17T03:50:10.515449782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:50:10.515894 containerd[1462]: time="2025-05-17T03:50:10.515520712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 03:50:10.517347 kubelet[2567]: E0517 03:50:10.516330 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 03:50:10.517347 kubelet[2567]: E0517 03:50:10.516506 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous 
token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 03:50:10.517347 kubelet[2567]: E0517 03:50:10.516825 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d884af590bea4bba8c65a41c6bf35a3a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:50:10.522928 containerd[1462]: time="2025-05-17T03:50:10.522271379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 03:50:10.889099 containerd[1462]: time="2025-05-17T03:50:10.888573927Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:50:10.891278 containerd[1462]: time="2025-05-17T03:50:10.891152638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:50:10.891538 containerd[1462]: time="2025-05-17T03:50:10.891394991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 03:50:10.891796 kubelet[2567]: E0517 03:50:10.891721 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 03:50:10.892020 kubelet[2567]: E0517 03:50:10.891819 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 03:50:10.892878 kubelet[2567]: E0517 03:50:10.892102 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:50:10.894078 kubelet[2567]: E0517 03:50:10.893948 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:50:14.153573 kubelet[2567]: E0517 03:50:14.153478 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:50:19.621503 update_engine[1446]: I20250517 03:50:19.620889 1446 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 03:50:19.621503 update_engine[1446]: I20250517 03:50:19.621536 1446 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 03:50:19.624298 update_engine[1446]: I20250517 03:50:19.624126 1446 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 03:50:19.629118 update_engine[1446]: I20250517 03:50:19.628098 1446 omaha_request_params.cc:62] Current group set to lts May 17 03:50:19.636669 update_engine[1446]: I20250517 03:50:19.636533 1446 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 03:50:19.636669 update_engine[1446]: I20250517 03:50:19.636636 1446 update_attempter.cc:643] Scheduling an action processor start. 
May 17 03:50:19.637509 update_engine[1446]: I20250517 03:50:19.636757 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 03:50:19.637509 update_engine[1446]: I20250517 03:50:19.637071 1446 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 03:50:19.637834 update_engine[1446]: I20250517 03:50:19.637605 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 03:50:19.637834 update_engine[1446]: I20250517 03:50:19.637655 1446 omaha_request_action.cc:272] Request: May 17 03:50:19.637834 update_engine[1446]: May 17 03:50:19.637834 update_engine[1446]: May 17 03:50:19.637834 update_engine[1446]: May 17 03:50:19.637834 update_engine[1446]: May 17 03:50:19.637834 update_engine[1446]: May 17 03:50:19.637834 update_engine[1446]: May 17 03:50:19.637834 update_engine[1446]: May 17 03:50:19.637834 update_engine[1446]: May 17 03:50:19.637834 update_engine[1446]: I20250517 03:50:19.637730 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 03:50:19.645438 locksmithd[1478]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 03:50:19.653994 update_engine[1446]: I20250517 03:50:19.653862 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 03:50:19.655603 update_engine[1446]: I20250517 03:50:19.655453 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 03:50:19.667904 update_engine[1446]: E20250517 03:50:19.667752 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 03:50:19.668131 update_engine[1446]: I20250517 03:50:19.668020 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 03:50:21.153191 kubelet[2567]: E0517 03:50:21.152998 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:50:28.154247 kubelet[2567]: E0517 03:50:28.153361 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:50:29.529929 update_engine[1446]: I20250517 03:50:29.529682 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 03:50:29.530894 update_engine[1446]: I20250517 03:50:29.530189 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 03:50:29.531009 update_engine[1446]: I20250517 03:50:29.530894 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 03:50:29.541389 update_engine[1446]: E20250517 03:50:29.541281 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 03:50:29.541559 update_engine[1446]: I20250517 03:50:29.541409 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 03:50:34.162641 kubelet[2567]: E0517 03:50:34.159863 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:50:39.534536 update_engine[1446]: I20250517 03:50:39.534307 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 03:50:39.535626 update_engine[1446]: I20250517 03:50:39.534793 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 03:50:39.535626 update_engine[1446]: I20250517 03:50:39.535565 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 03:50:39.546058 update_engine[1446]: E20250517 03:50:39.545944 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 03:50:39.546282 update_engine[1446]: I20250517 03:50:39.546073 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 03:50:40.164816 kubelet[2567]: E0517 03:50:40.164688 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:50:46.153899 kubelet[2567]: E0517 03:50:46.153726 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:50:49.532386 update_engine[1446]: I20250517 03:50:49.532104 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 03:50:49.533164 update_engine[1446]: I20250517 03:50:49.532714 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 03:50:49.533340 update_engine[1446]: I20250517 03:50:49.533164 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 03:50:49.544361 update_engine[1446]: E20250517 03:50:49.544153 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 03:50:49.544634 update_engine[1446]: I20250517 03:50:49.544360 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 03:50:49.544634 update_engine[1446]: I20250517 03:50:49.544417 1446 omaha_request_action.cc:617] Omaha request response: May 17 03:50:49.544634 update_engine[1446]: E20250517 03:50:49.544607 1446 omaha_request_action.cc:636] Omaha request network transfer failed. May 17 03:50:49.544858 update_engine[1446]: I20250517 03:50:49.544687 1446 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 17 03:50:49.544858 update_engine[1446]: I20250517 03:50:49.544705 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 03:50:49.544858 update_engine[1446]: I20250517 03:50:49.544718 1446 update_attempter.cc:306] Processing Done. May 17 03:50:49.544858 update_engine[1446]: E20250517 03:50:49.544770 1446 update_attempter.cc:619] Update failed. May 17 03:50:49.544858 update_engine[1446]: I20250517 03:50:49.544796 1446 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 03:50:49.544858 update_engine[1446]: I20250517 03:50:49.544811 1446 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 03:50:49.544858 update_engine[1446]: I20250517 03:50:49.544829 1446 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 17 03:50:49.545581 update_engine[1446]: I20250517 03:50:49.544981 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 03:50:49.545581 update_engine[1446]: I20250517 03:50:49.545032 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 03:50:49.545581 update_engine[1446]: I20250517 03:50:49.545047 1446 omaha_request_action.cc:272] Request: May 17 03:50:49.545581 update_engine[1446]: May 17 03:50:49.545581 update_engine[1446]: May 17 03:50:49.545581 update_engine[1446]: May 17 03:50:49.545581 update_engine[1446]: May 17 03:50:49.545581 update_engine[1446]: May 17 03:50:49.545581 update_engine[1446]: May 17 03:50:49.545581 update_engine[1446]: I20250517 03:50:49.545061 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 03:50:49.545581 update_engine[1446]: I20250517 03:50:49.545413 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 03:50:49.546610 update_engine[1446]: I20250517 03:50:49.545774 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 03:50:49.546734 locksmithd[1478]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 03:50:49.556147 update_engine[1446]: E20250517 03:50:49.556042 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 03:50:49.556369 update_engine[1446]: I20250517 03:50:49.556162 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 03:50:49.556369 update_engine[1446]: I20250517 03:50:49.556186 1446 omaha_request_action.cc:617] Omaha request response: May 17 03:50:49.556369 update_engine[1446]: I20250517 03:50:49.556284 1446 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 03:50:49.556369 update_engine[1446]: I20250517 03:50:49.556303 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 03:50:49.556369 update_engine[1446]: I20250517 03:50:49.556314 1446 update_attempter.cc:306] Processing Done. May 17 03:50:49.556369 update_engine[1446]: I20250517 03:50:49.556329 1446 update_attempter.cc:310] Error event sent. 
May 17 03:50:49.556933 update_engine[1446]: I20250517 03:50:49.556377 1446 update_check_scheduler.cc:74] Next update check in 41m2s May 17 03:50:49.557492 locksmithd[1478]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 03:50:53.151282 kubelet[2567]: E0517 03:50:53.151035 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:50:58.159083 kubelet[2567]: E0517 03:50:58.157632 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:51:04.152849 kubelet[2567]: E0517 03:51:04.152462 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:51:13.154521 kubelet[2567]: E0517 03:51:13.154118 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:51:16.151697 kubelet[2567]: E0517 03:51:16.151530 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:51:26.160631 kubelet[2567]: E0517 03:51:26.160069 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:51:28.157853 containerd[1462]: time="2025-05-17T03:51:28.156753833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 03:51:28.550369 containerd[1462]: time="2025-05-17T03:51:28.550190915Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:51:28.552963 containerd[1462]: time="2025-05-17T03:51:28.552614988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 03:51:28.552963 containerd[1462]: time="2025-05-17T03:51:28.552620478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:51:28.553874 kubelet[2567]: E0517 03:51:28.553652 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 03:51:28.557488 kubelet[2567]: E0517 03:51:28.553949 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 03:51:28.557488 kubelet[2567]: E0517 03:51:28.554951 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccdp4,ReadOnly:t
rue,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gbf9j_calico-system(5d31f0bb-0747-4e8f-868a-d7b2d8faa68d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:51:28.558660 kubelet[2567]: E0517 03:51:28.558456 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:51:40.168437 kubelet[2567]: E0517 03:51:40.166424 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:51:41.154858 containerd[1462]: time="2025-05-17T03:51:41.154103698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 03:51:41.524316 containerd[1462]: time="2025-05-17T03:51:41.523862792Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:51:41.526545 containerd[1462]: time="2025-05-17T03:51:41.526409049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:51:41.526859 containerd[1462]: time="2025-05-17T03:51:41.526738143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 03:51:41.527563 kubelet[2567]: E0517 03:51:41.527382 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 03:51:41.528487 kubelet[2567]: E0517 03:51:41.527601 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 03:51:41.528487 kubelet[2567]: E0517 03:51:41.528313 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d884af590bea4bba8c65a41c6bf35a3a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:51:41.532733 containerd[1462]: 
time="2025-05-17T03:51:41.532674722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 03:51:41.911292 containerd[1462]: time="2025-05-17T03:51:41.910988084Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 03:51:41.913149 containerd[1462]: time="2025-05-17T03:51:41.913041060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 03:51:41.913354 containerd[1462]: time="2025-05-17T03:51:41.913281509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 03:51:41.914132 kubelet[2567]: E0517 03:51:41.913986 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 03:51:41.914498 kubelet[2567]: E0517 03:51:41.914150 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 03:51:41.915013 kubelet[2567]: E0517 03:51:41.914848 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 03:51:41.916404 kubelet[2567]: E0517 03:51:41.916324 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:51:53.156156 kubelet[2567]: E0517 03:51:53.155483 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:51:57.154770 kubelet[2567]: E0517 03:51:57.154602 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:52:08.157498 kubelet[2567]: E0517 03:52:08.156948 2567 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:52:08.162082 kubelet[2567]: E0517 03:52:08.161953 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:52:20.167054 kubelet[2567]: E0517 03:52:20.166626 2567 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:52:23.153003 kubelet[2567]: E0517 03:52:23.152468 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:52:28.773130 systemd[1]: 
run-containerd-runc-k8s.io-4478d9ad2989a7801120da2e23fcb3dbda37a5c427118e76ff0db9eee3d4549c-runc.1uYEE8.mount: Deactivated successfully. May 17 03:52:32.160791 kubelet[2567]: E0517 03:52:32.160190 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:52:35.154588 kubelet[2567]: E0517 03:52:35.154300 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:52:47.153779 kubelet[2567]: E0517 03:52:47.153237 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:52:47.158535 kubelet[2567]: E0517 03:52:47.158310 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request 
to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:52:59.157832 kubelet[2567]: E0517 03:52:59.155907 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:53:01.154184 kubelet[2567]: E0517 03:53:01.152182 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:53:12.159463 kubelet[2567]: E0517 03:53:12.156359 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:53:13.741530 systemd[1]: Started sshd@9-172.24.4.46:22-172.24.4.1:42672.service - OpenSSH per-connection server daemon (172.24.4.1:42672). May 17 03:53:14.951597 sshd[5994]: Accepted publickey for core from 172.24.4.1 port 42672 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg May 17 03:53:14.959395 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 03:53:14.979657 systemd-logind[1443]: New session 12 of user core. 
May 17 03:53:14.987630 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 03:53:15.758850 sshd[5994]: pam_unix(sshd:session): session closed for user core May 17 03:53:15.763139 systemd[1]: sshd@9-172.24.4.46:22-172.24.4.1:42672.service: Deactivated successfully. May 17 03:53:15.766372 systemd[1]: session-12.scope: Deactivated successfully. May 17 03:53:15.769794 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. May 17 03:53:15.771771 systemd-logind[1443]: Removed session 12. May 17 03:53:16.158338 kubelet[2567]: E0517 03:53:16.156852 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:53:20.799677 systemd[1]: Started sshd@10-172.24.4.46:22-172.24.4.1:42686.service - OpenSSH per-connection server daemon (172.24.4.1:42686). May 17 03:53:22.005995 sshd[6050]: Accepted publickey for core from 172.24.4.1 port 42686 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg May 17 03:53:22.011609 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 03:53:22.021086 systemd-logind[1443]: New session 13 of user core. May 17 03:53:22.029977 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 17 03:53:22.907870 sshd[6050]: pam_unix(sshd:session): session closed for user core May 17 03:53:22.913373 systemd[1]: sshd@10-172.24.4.46:22-172.24.4.1:42686.service: Deactivated successfully. May 17 03:53:22.917368 systemd[1]: session-13.scope: Deactivated successfully. May 17 03:53:22.918546 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. May 17 03:53:22.919672 systemd-logind[1443]: Removed session 13. May 17 03:53:25.152797 kubelet[2567]: E0517 03:53:25.152443 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac" May 17 03:53:27.947029 systemd[1]: Started sshd@11-172.24.4.46:22-172.24.4.1:49720.service - OpenSSH per-connection server daemon (172.24.4.1:49720). 
May 17 03:53:28.156801 kubelet[2567]: E0517 03:53:28.156501 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d" May 17 03:53:29.258816 sshd[6066]: Accepted publickey for core from 172.24.4.1 port 49720 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg May 17 03:53:29.263098 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 03:53:29.283362 systemd-logind[1443]: New session 14 of user core. May 17 03:53:29.297672 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 03:53:30.060953 sshd[6066]: pam_unix(sshd:session): session closed for user core May 17 03:53:30.068831 systemd[1]: sshd@11-172.24.4.46:22-172.24.4.1:49720.service: Deactivated successfully. May 17 03:53:30.072101 systemd[1]: session-14.scope: Deactivated successfully. May 17 03:53:30.074289 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. May 17 03:53:30.081525 systemd[1]: Started sshd@12-172.24.4.46:22-172.24.4.1:49728.service - OpenSSH per-connection server daemon (172.24.4.1:49728). May 17 03:53:30.083881 systemd-logind[1443]: Removed session 14. 
May 17 03:53:31.249852 sshd[6101]: Accepted publickey for core from 172.24.4.1 port 49728 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg May 17 03:53:31.253064 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 03:53:31.269989 systemd-logind[1443]: New session 15 of user core. May 17 03:53:31.275578 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 03:53:32.107682 sshd[6101]: pam_unix(sshd:session): session closed for user core May 17 03:53:32.121882 systemd[1]: sshd@12-172.24.4.46:22-172.24.4.1:49728.service: Deactivated successfully. May 17 03:53:32.128482 systemd[1]: session-15.scope: Deactivated successfully. May 17 03:53:32.132169 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. May 17 03:53:32.143115 systemd[1]: Started sshd@13-172.24.4.46:22-172.24.4.1:49732.service - OpenSSH per-connection server daemon (172.24.4.1:49732). May 17 03:53:32.146161 systemd-logind[1443]: Removed session 15. May 17 03:53:33.258877 sshd[6112]: Accepted publickey for core from 172.24.4.1 port 49732 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg May 17 03:53:33.275311 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 03:53:33.286439 systemd-logind[1443]: New session 16 of user core. May 17 03:53:33.297578 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 03:53:34.348941 sshd[6112]: pam_unix(sshd:session): session closed for user core May 17 03:53:34.360787 systemd[1]: sshd@13-172.24.4.46:22-172.24.4.1:49732.service: Deactivated successfully. May 17 03:53:34.365904 systemd[1]: session-16.scope: Deactivated successfully. May 17 03:53:34.371866 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. May 17 03:53:34.375934 systemd-logind[1443]: Removed session 16. 
May 17 03:53:39.152500 kubelet[2567]: E0517 03:53:39.152241 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:53:39.373731 systemd[1]: Started sshd@14-172.24.4.46:22-172.24.4.1:51798.service - OpenSSH per-connection server daemon (172.24.4.1:51798).
May 17 03:53:40.158578 kubelet[2567]: E0517 03:53:40.158459 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:53:40.568352 sshd[6149]: Accepted publickey for core from 172.24.4.1 port 51798 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:53:40.570027 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:53:40.578179 systemd-logind[1443]: New session 17 of user core.
May 17 03:53:40.582347 systemd[1]: Started session-17.scope - Session 17 of User core.
May 17 03:53:41.327410 sshd[6149]: pam_unix(sshd:session): session closed for user core
May 17 03:53:41.331031 systemd[1]: sshd@14-172.24.4.46:22-172.24.4.1:51798.service: Deactivated successfully.
May 17 03:53:41.333801 systemd[1]: session-17.scope: Deactivated successfully.
May 17 03:53:41.336544 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
May 17 03:53:41.338595 systemd-logind[1443]: Removed session 17.
May 17 03:53:46.352781 systemd[1]: Started sshd@15-172.24.4.46:22-172.24.4.1:42766.service - OpenSSH per-connection server daemon (172.24.4.1:42766).
May 17 03:53:47.351307 sshd[6185]: Accepted publickey for core from 172.24.4.1 port 42766 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:53:47.354585 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:53:47.368696 systemd-logind[1443]: New session 18 of user core.
May 17 03:53:47.374538 systemd[1]: Started session-18.scope - Session 18 of User core.
May 17 03:53:48.081232 sshd[6185]: pam_unix(sshd:session): session closed for user core
May 17 03:53:48.084884 systemd[1]: sshd@15-172.24.4.46:22-172.24.4.1:42766.service: Deactivated successfully.
May 17 03:53:48.090723 systemd[1]: session-18.scope: Deactivated successfully.
May 17 03:53:48.093825 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
May 17 03:53:48.095280 systemd-logind[1443]: Removed session 18.
May 17 03:53:52.153583 kubelet[2567]: E0517 03:53:52.153515 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:53:53.105544 systemd[1]: Started sshd@16-172.24.4.46:22-172.24.4.1:42770.service - OpenSSH per-connection server daemon (172.24.4.1:42770).
May 17 03:53:54.067331 sshd[6206]: Accepted publickey for core from 172.24.4.1 port 42770 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:53:54.069010 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:53:54.079465 systemd-logind[1443]: New session 19 of user core.
May 17 03:53:54.088680 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 03:53:54.158108 kubelet[2567]: E0517 03:53:54.158015 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:53:55.006636 sshd[6206]: pam_unix(sshd:session): session closed for user core
May 17 03:53:55.019647 systemd[1]: sshd@16-172.24.4.46:22-172.24.4.1:42770.service: Deactivated successfully.
May 17 03:53:55.025469 systemd[1]: session-19.scope: Deactivated successfully.
May 17 03:53:55.029747 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
May 17 03:53:55.038967 systemd[1]: Started sshd@17-172.24.4.46:22-172.24.4.1:37304.service - OpenSSH per-connection server daemon (172.24.4.1:37304).
May 17 03:53:55.045127 systemd-logind[1443]: Removed session 19.
May 17 03:53:56.247874 sshd[6219]: Accepted publickey for core from 172.24.4.1 port 37304 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:53:56.251834 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:53:56.265482 systemd-logind[1443]: New session 20 of user core.
May 17 03:53:56.273077 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 03:53:57.636104 sshd[6219]: pam_unix(sshd:session): session closed for user core
May 17 03:53:57.649093 systemd[1]: sshd@17-172.24.4.46:22-172.24.4.1:37304.service: Deactivated successfully.
May 17 03:53:57.656513 systemd[1]: session-20.scope: Deactivated successfully.
May 17 03:53:57.659451 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
May 17 03:53:57.670979 systemd[1]: Started sshd@18-172.24.4.46:22-172.24.4.1:37320.service - OpenSSH per-connection server daemon (172.24.4.1:37320).
May 17 03:53:57.674801 systemd-logind[1443]: Removed session 20.
May 17 03:53:58.875930 sshd[6231]: Accepted publickey for core from 172.24.4.1 port 37320 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:53:58.876769 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:53:58.881482 systemd-logind[1443]: New session 21 of user core.
May 17 03:53:58.887384 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 03:54:00.695438 sshd[6231]: pam_unix(sshd:session): session closed for user core
May 17 03:54:00.704386 systemd[1]: sshd@18-172.24.4.46:22-172.24.4.1:37320.service: Deactivated successfully.
May 17 03:54:00.706804 systemd[1]: session-21.scope: Deactivated successfully.
May 17 03:54:00.707748 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
May 17 03:54:00.716564 systemd[1]: Started sshd@19-172.24.4.46:22-172.24.4.1:37330.service - OpenSSH per-connection server daemon (172.24.4.1:37330).
May 17 03:54:00.718492 systemd-logind[1443]: Removed session 21.
May 17 03:54:01.998666 sshd[6274]: Accepted publickey for core from 172.24.4.1 port 37330 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:54:02.003693 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:54:02.017781 systemd-logind[1443]: New session 22 of user core.
May 17 03:54:02.026637 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 03:54:03.062695 sshd[6274]: pam_unix(sshd:session): session closed for user core
May 17 03:54:03.077552 systemd[1]: sshd@19-172.24.4.46:22-172.24.4.1:37330.service: Deactivated successfully.
May 17 03:54:03.083508 systemd[1]: session-22.scope: Deactivated successfully.
May 17 03:54:03.085493 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
May 17 03:54:03.095955 systemd[1]: Started sshd@20-172.24.4.46:22-172.24.4.1:37338.service - OpenSSH per-connection server daemon (172.24.4.1:37338).
May 17 03:54:03.099102 systemd-logind[1443]: Removed session 22.
May 17 03:54:04.156315 kubelet[2567]: E0517 03:54:04.156057 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:54:04.250897 sshd[6285]: Accepted publickey for core from 172.24.4.1 port 37338 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:54:04.255392 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:54:04.268051 systemd-logind[1443]: New session 23 of user core.
May 17 03:54:04.278582 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 03:54:04.850075 sshd[6285]: pam_unix(sshd:session): session closed for user core
May 17 03:54:04.860544 systemd[1]: sshd@20-172.24.4.46:22-172.24.4.1:37338.service: Deactivated successfully.
May 17 03:54:04.869096 systemd[1]: session-23.scope: Deactivated successfully.
May 17 03:54:04.871410 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
May 17 03:54:04.874069 systemd-logind[1443]: Removed session 23.
May 17 03:54:07.154524 kubelet[2567]: E0517 03:54:07.153829 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:54:09.871486 systemd[1]: Started sshd@21-172.24.4.46:22-172.24.4.1:44418.service - OpenSSH per-connection server daemon (172.24.4.1:44418).
May 17 03:54:11.130349 sshd[6300]: Accepted publickey for core from 172.24.4.1 port 44418 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:54:11.133642 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:54:11.145431 systemd-logind[1443]: New session 24 of user core.
May 17 03:54:11.155586 systemd[1]: Started session-24.scope - Session 24 of User core.
May 17 03:54:11.930905 sshd[6300]: pam_unix(sshd:session): session closed for user core
May 17 03:54:11.937067 systemd[1]: sshd@21-172.24.4.46:22-172.24.4.1:44418.service: Deactivated successfully.
May 17 03:54:11.940069 systemd[1]: session-24.scope: Deactivated successfully.
May 17 03:54:11.942357 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit.
May 17 03:54:11.944173 systemd-logind[1443]: Removed session 24.
May 17 03:54:16.949895 systemd[1]: Started sshd@22-172.24.4.46:22-172.24.4.1:59994.service - OpenSSH per-connection server daemon (172.24.4.1:59994).
May 17 03:54:18.123277 sshd[6334]: Accepted publickey for core from 172.24.4.1 port 59994 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:54:18.125176 sshd[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:54:18.134165 systemd-logind[1443]: New session 25 of user core.
May 17 03:54:18.138373 systemd[1]: Started session-25.scope - Session 25 of User core.
May 17 03:54:18.843351 sshd[6334]: pam_unix(sshd:session): session closed for user core
May 17 03:54:18.846809 systemd[1]: sshd@22-172.24.4.46:22-172.24.4.1:59994.service: Deactivated successfully.
May 17 03:54:18.849598 systemd[1]: session-25.scope: Deactivated successfully.
May 17 03:54:18.851929 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit.
May 17 03:54:18.853553 systemd-logind[1443]: Removed session 25.
May 17 03:54:19.154263 containerd[1462]: time="2025-05-17T03:54:19.154051331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 03:54:19.514114 containerd[1462]: time="2025-05-17T03:54:19.514043374Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 03:54:19.516019 containerd[1462]: time="2025-05-17T03:54:19.515815043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 03:54:19.516019 containerd[1462]: time="2025-05-17T03:54:19.515892173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 03:54:19.516508 kubelet[2567]: E0517 03:54:19.516421 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 03:54:19.517569 kubelet[2567]: E0517 03:54:19.516555 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 03:54:19.517618 kubelet[2567]: E0517 03:54:19.517473 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccdp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-gbf9j_calico-system(5d31f0bb-0747-4e8f-868a-d7b2d8faa68d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 03:54:19.518919 kubelet[2567]: E0517 03:54:19.518851 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:54:22.160658 containerd[1462]: time="2025-05-17T03:54:22.160574653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 03:54:22.519029 containerd[1462]: time="2025-05-17T03:54:22.518926964Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 03:54:22.520699 containerd[1462]: time="2025-05-17T03:54:22.520648777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 17 03:54:22.522475 containerd[1462]: time="2025-05-17T03:54:22.520769190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 17 03:54:22.522698 kubelet[2567]: E0517 03:54:22.521087 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 03:54:22.522698 kubelet[2567]: E0517 03:54:22.521256 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 03:54:22.522698 kubelet[2567]: E0517 03:54:22.521587 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d884af590bea4bba8c65a41c6bf35a3a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 03:54:22.527297 containerd[1462]: time="2025-05-17T03:54:22.527175088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 17 03:54:22.863893 containerd[1462]: time="2025-05-17T03:54:22.863469300Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 03:54:22.866042 containerd[1462]: time="2025-05-17T03:54:22.865815274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 17 03:54:22.866300 containerd[1462]: time="2025-05-17T03:54:22.865864861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 17 03:54:22.866437 kubelet[2567]: E0517 03:54:22.866356 2567 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 03:54:22.866593 kubelet[2567]: E0517 03:54:22.866449 2567 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 03:54:22.866713 kubelet[2567]: E0517 03:54:22.866616 2567 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d56654d85-kd7gz_calico-system(2a3cbd78-bd6f-48be-a6de-d94293efa7ac): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 03:54:22.868152 kubelet[2567]: E0517 03:54:22.868085 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:54:23.857623 systemd[1]: Started sshd@23-172.24.4.46:22-172.24.4.1:50048.service - OpenSSH per-connection server daemon (172.24.4.1:50048).
May 17 03:54:25.126824 sshd[6347]: Accepted publickey for core from 172.24.4.1 port 50048 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:54:25.128938 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:54:25.137359 systemd-logind[1443]: New session 26 of user core.
May 17 03:54:25.142360 systemd[1]: Started session-26.scope - Session 26 of User core.
May 17 03:54:25.889325 sshd[6347]: pam_unix(sshd:session): session closed for user core
May 17 03:54:25.899653 systemd[1]: sshd@23-172.24.4.46:22-172.24.4.1:50048.service: Deactivated successfully.
May 17 03:54:25.907743 systemd[1]: session-26.scope: Deactivated successfully.
May 17 03:54:25.910087 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit.
May 17 03:54:25.913281 systemd-logind[1443]: Removed session 26.
May 17 03:54:30.918426 systemd[1]: Started sshd@24-172.24.4.46:22-172.24.4.1:50060.service - OpenSSH per-connection server daemon (172.24.4.1:50060).
May 17 03:54:32.162836 kubelet[2567]: E0517 03:54:32.160965 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:54:32.178325 sshd[6381]: Accepted publickey for core from 172.24.4.1 port 50060 ssh2: RSA SHA256:iJ5ST8WmTIeVoCteR7vtnfZaZrbGA9uLglwSiNQSKqg
May 17 03:54:32.182727 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 03:54:32.205137 systemd-logind[1443]: New session 27 of user core.
May 17 03:54:32.212376 systemd[1]: Started session-27.scope - Session 27 of User core.
May 17 03:54:32.995805 sshd[6381]: pam_unix(sshd:session): session closed for user core
May 17 03:54:33.005052 systemd[1]: sshd@24-172.24.4.46:22-172.24.4.1:50060.service: Deactivated successfully.
May 17 03:54:33.010856 systemd[1]: session-27.scope: Deactivated successfully.
May 17 03:54:33.013433 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit.
May 17 03:54:33.016560 systemd-logind[1443]: Removed session 27.
May 17 03:54:36.158303 kubelet[2567]: E0517 03:54:36.157642 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:54:47.152369 kubelet[2567]: E0517 03:54:47.151942 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:54:51.158885 kubelet[2567]: E0517 03:54:51.158587 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:54:59.151444 kubelet[2567]: E0517 03:54:59.151341 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:55:03.154494 kubelet[2567]: E0517 03:55:03.153792 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:55:14.152707 kubelet[2567]: E0517 03:55:14.152528 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:55:14.154374 kubelet[2567]: E0517 03:55:14.154150 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:55:27.152785 kubelet[2567]: E0517 03:55:27.152525 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:55:29.150676 kubelet[2567]: E0517 03:55:29.150601 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:55:41.153072 kubelet[2567]: E0517 03:55:41.152963 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:55:44.151818 kubelet[2567]: E0517 03:55:44.151693 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:55:55.154070 kubelet[2567]: E0517 03:55:55.153535 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:55:56.153763 kubelet[2567]: E0517 03:55:56.152920 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:55:58.754683 systemd[1]: run-containerd-runc-k8s.io-4478d9ad2989a7801120da2e23fcb3dbda37a5c427118e76ff0db9eee3d4549c-runc.YKXkb8.mount: Deactivated successfully.
May 17 03:56:07.152066 kubelet[2567]: E0517 03:56:07.151366 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:56:10.156143 kubelet[2567]: E0517 03:56:10.155467 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:56:22.153383 kubelet[2567]: E0517 03:56:22.152985 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:56:22.155948 kubelet[2567]: E0517 03:56:22.154667 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"
May 17 03:56:34.151664 kubelet[2567]: E0517 03:56:34.151494 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-gbf9j" podUID="5d31f0bb-0747-4e8f-868a-d7b2d8faa68d"
May 17 03:56:36.167928 kubelet[2567]: E0517 03:56:36.167513 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7d56654d85-kd7gz" podUID="2a3cbd78-bd6f-48be-a6de-d94293efa7ac"