Jun 20 19:48:50.952574 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025 Jun 20 19:48:50.952600 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:48:50.952610 kernel: BIOS-provided physical RAM map: Jun 20 19:48:50.952620 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 20 19:48:50.952628 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 20 19:48:50.952635 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 20 19:48:50.952644 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jun 20 19:48:50.952652 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jun 20 19:48:50.952659 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 20 19:48:50.952667 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 20 19:48:50.952675 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jun 20 19:48:50.952683 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 20 19:48:50.952692 kernel: NX (Execute Disable) protection: active Jun 20 19:48:50.952700 kernel: APIC: Static calls initialized Jun 20 19:48:50.952709 kernel: SMBIOS 3.0.0 present. 
Jun 20 19:48:50.952717 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jun 20 19:48:50.952725 kernel: DMI: Memory slots populated: 1/1 Jun 20 19:48:50.952734 kernel: Hypervisor detected: KVM Jun 20 19:48:50.952742 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 20 19:48:50.952750 kernel: kvm-clock: using sched offset of 4987690849 cycles Jun 20 19:48:50.952759 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 19:48:50.952767 kernel: tsc: Detected 1996.249 MHz processor Jun 20 19:48:50.952776 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:48:50.952785 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:48:50.952793 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jun 20 19:48:50.952801 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 20 19:48:50.952811 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:48:50.952820 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jun 20 19:48:50.952828 kernel: ACPI: Early table checksum verification disabled Jun 20 19:48:50.952836 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jun 20 19:48:50.952844 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:48:50.952852 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:48:50.952861 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:48:50.952869 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jun 20 19:48:50.952877 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:48:50.952887 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:48:50.952895 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jun 20 
19:48:50.952903 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jun 20 19:48:50.952911 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jun 20 19:48:50.952920 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jun 20 19:48:50.952931 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jun 20 19:48:50.952939 kernel: No NUMA configuration found Jun 20 19:48:50.952950 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jun 20 19:48:50.952958 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff] Jun 20 19:48:50.952967 kernel: Zone ranges: Jun 20 19:48:50.952975 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:48:50.952984 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 20 19:48:50.952992 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jun 20 19:48:50.953001 kernel: Device empty Jun 20 19:48:50.953010 kernel: Movable zone start for each node Jun 20 19:48:50.953020 kernel: Early memory node ranges Jun 20 19:48:50.953028 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 20 19:48:50.953036 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jun 20 19:48:50.953045 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jun 20 19:48:50.953054 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jun 20 19:48:50.953062 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:48:50.953071 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 20 19:48:50.953080 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jun 20 19:48:50.953088 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 20 19:48:50.953098 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 20 19:48:50.953107 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:48:50.953115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 20 
19:48:50.953124 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 20 19:48:50.953133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 19:48:50.953141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 20 19:48:50.953150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 20 19:48:50.953158 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:48:50.953198 kernel: CPU topo: Max. logical packages: 2 Jun 20 19:48:50.953209 kernel: CPU topo: Max. logical dies: 2 Jun 20 19:48:50.953218 kernel: CPU topo: Max. dies per package: 1 Jun 20 19:48:50.953226 kernel: CPU topo: Max. threads per core: 1 Jun 20 19:48:50.953235 kernel: CPU topo: Num. cores per package: 1 Jun 20 19:48:50.953243 kernel: CPU topo: Num. threads per package: 1 Jun 20 19:48:50.953252 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 20 19:48:50.953260 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 20 19:48:50.953269 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jun 20 19:48:50.953277 kernel: Booting paravirtualized kernel on KVM Jun 20 19:48:50.953288 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:48:50.953297 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 19:48:50.953305 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 20 19:48:50.953314 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 20 19:48:50.953322 kernel: pcpu-alloc: [0] 0 1 Jun 20 19:48:50.953331 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 20 19:48:50.953341 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT 
console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:48:50.953350 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:48:50.953360 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 19:48:50.953369 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:48:50.953377 kernel: Fallback order for Node 0: 0 Jun 20 19:48:50.953386 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Jun 20 19:48:50.953395 kernel: Policy zone: Normal Jun 20 19:48:50.953403 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:48:50.953412 kernel: software IO TLB: area num 2. Jun 20 19:48:50.953420 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 19:48:50.953429 kernel: ftrace: allocating 40093 entries in 157 pages Jun 20 19:48:50.953439 kernel: ftrace: allocated 157 pages with 5 groups Jun 20 19:48:50.953448 kernel: Dynamic Preempt: voluntary Jun 20 19:48:50.953456 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:48:50.953466 kernel: rcu: RCU event tracing is enabled. Jun 20 19:48:50.953474 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 19:48:50.953483 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:48:50.953492 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:48:50.953501 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:48:50.953509 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:48:50.953518 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 19:48:50.953528 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 19:48:50.953537 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:48:50.953546 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:48:50.953554 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 20 19:48:50.953563 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 19:48:50.953571 kernel: Console: colour VGA+ 80x25 Jun 20 19:48:50.953580 kernel: printk: legacy console [tty0] enabled Jun 20 19:48:50.953589 kernel: printk: legacy console [ttyS0] enabled Jun 20 19:48:50.953597 kernel: ACPI: Core revision 20240827 Jun 20 19:48:50.953607 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:48:50.953616 kernel: x2apic enabled Jun 20 19:48:50.953624 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:48:50.953633 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 20 19:48:50.953642 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 20 19:48:50.953656 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jun 20 19:48:50.953667 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 20 19:48:50.953676 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 20 19:48:50.953685 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:48:50.953694 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 19:48:50.953703 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 19:48:50.953714 kernel: Speculative Store Bypass: Vulnerable Jun 20 19:48:50.953723 kernel: x86/fpu: x87 FPU will use FXSAVE Jun 20 19:48:50.953732 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:48:50.953741 kernel: pid_max: default: 32768 minimum: 301 Jun 20 19:48:50.953750 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 19:48:50.953761 kernel: landlock: Up and running. Jun 20 19:48:50.953770 kernel: SELinux: Initializing. Jun 20 19:48:50.953779 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 19:48:50.953789 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 19:48:50.953799 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jun 20 19:48:50.953808 kernel: Performance Events: AMD PMU driver. Jun 20 19:48:50.953818 kernel: ... version: 0 Jun 20 19:48:50.953827 kernel: ... bit width: 48 Jun 20 19:48:50.953836 kernel: ... generic registers: 4 Jun 20 19:48:50.953847 kernel: ... value mask: 0000ffffffffffff Jun 20 19:48:50.953857 kernel: ... max period: 00007fffffffffff Jun 20 19:48:50.953866 kernel: ... fixed-purpose events: 0 Jun 20 19:48:50.953875 kernel: ... event mask: 000000000000000f Jun 20 19:48:50.953884 kernel: signal: max sigframe size: 1440 Jun 20 19:48:50.953894 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:48:50.953903 kernel: rcu: Max phase no-delay instances is 400. 
Jun 20 19:48:50.953912 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 20 19:48:50.953922 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:48:50.953933 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:48:50.953943 kernel: .... node #0, CPUs: #1 Jun 20 19:48:50.953952 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:48:50.953962 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jun 20 19:48:50.953972 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 227296K reserved, 0K cma-reserved) Jun 20 19:48:50.953981 kernel: devtmpfs: initialized Jun 20 19:48:50.953990 kernel: x86/mm: Memory block size: 128MB Jun 20 19:48:50.954000 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:48:50.954009 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 19:48:50.954020 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:48:50.954029 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:48:50.954038 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:48:50.954047 kernel: audit: type=2000 audit(1750448927.100:1): state=initialized audit_enabled=0 res=1 Jun 20 19:48:50.954056 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:48:50.954066 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:48:50.954075 kernel: cpuidle: using governor menu Jun 20 19:48:50.954084 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:48:50.954093 kernel: dca service started, version 1.12.1 Jun 20 19:48:50.954104 kernel: PCI: Using configuration type 1 for base access Jun 20 19:48:50.954114 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 20 19:48:50.954123 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:48:50.954132 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:48:50.954141 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:48:50.954150 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:48:50.954159 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:48:50.954216 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 19:48:50.954226 kernel: ACPI: Interpreter enabled Jun 20 19:48:50.954235 kernel: ACPI: PM: (supports S0 S3 S5) Jun 20 19:48:50.954247 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:48:50.954256 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:48:50.954265 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 19:48:50.954274 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 20 19:48:50.954284 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 19:48:50.954437 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:48:50.954528 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 20 19:48:50.954619 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 20 19:48:50.954633 kernel: acpiphp: Slot [3] registered Jun 20 19:48:50.954642 kernel: acpiphp: Slot [4] registered Jun 20 19:48:50.954651 kernel: acpiphp: Slot [5] registered Jun 20 19:48:50.954660 kernel: acpiphp: Slot [6] registered Jun 20 19:48:50.954669 kernel: acpiphp: Slot [7] registered Jun 20 19:48:50.954678 kernel: acpiphp: Slot [8] registered Jun 20 19:48:50.954687 kernel: acpiphp: Slot [9] registered Jun 20 19:48:50.954696 kernel: acpiphp: Slot [10] registered Jun 20 19:48:50.954708 kernel: acpiphp: Slot [11] registered Jun 20 19:48:50.954717 kernel: acpiphp: Slot [12] 
registered Jun 20 19:48:50.954726 kernel: acpiphp: Slot [13] registered Jun 20 19:48:50.954735 kernel: acpiphp: Slot [14] registered Jun 20 19:48:50.954744 kernel: acpiphp: Slot [15] registered Jun 20 19:48:50.954753 kernel: acpiphp: Slot [16] registered Jun 20 19:48:50.954762 kernel: acpiphp: Slot [17] registered Jun 20 19:48:50.954771 kernel: acpiphp: Slot [18] registered Jun 20 19:48:50.954780 kernel: acpiphp: Slot [19] registered Jun 20 19:48:50.954790 kernel: acpiphp: Slot [20] registered Jun 20 19:48:50.954799 kernel: acpiphp: Slot [21] registered Jun 20 19:48:50.954821 kernel: acpiphp: Slot [22] registered Jun 20 19:48:50.954830 kernel: acpiphp: Slot [23] registered Jun 20 19:48:50.954839 kernel: acpiphp: Slot [24] registered Jun 20 19:48:50.954848 kernel: acpiphp: Slot [25] registered Jun 20 19:48:50.954857 kernel: acpiphp: Slot [26] registered Jun 20 19:48:50.954866 kernel: acpiphp: Slot [27] registered Jun 20 19:48:50.955258 kernel: acpiphp: Slot [28] registered Jun 20 19:48:50.955302 kernel: acpiphp: Slot [29] registered Jun 20 19:48:50.955343 kernel: acpiphp: Slot [30] registered Jun 20 19:48:50.955367 kernel: acpiphp: Slot [31] registered Jun 20 19:48:50.955392 kernel: PCI host bridge to bus 0000:00 Jun 20 19:48:50.955724 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 19:48:50.955931 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 20 19:48:50.956133 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 20 19:48:50.959606 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 20 19:48:50.959827 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jun 20 19:48:50.960018 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 19:48:50.960331 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:48:50.960541 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 
conventional PCI endpoint Jun 20 19:48:50.960725 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jun 20 19:48:50.960895 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f] Jun 20 19:48:50.961066 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jun 20 19:48:50.961750 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jun 20 19:48:50.961923 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jun 20 19:48:50.962086 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jun 20 19:48:50.962305 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jun 20 19:48:50.962484 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 20 19:48:50.962647 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 20 19:48:50.962859 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jun 20 19:48:50.963029 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jun 20 19:48:50.965076 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref] Jun 20 19:48:50.965263 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff] Jun 20 19:48:50.965360 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref] Jun 20 19:48:50.965449 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 19:48:50.965545 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 20 19:48:50.965640 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf] Jun 20 19:48:50.965727 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff] Jun 20 19:48:50.965815 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref] Jun 20 19:48:50.965902 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref] Jun 20 19:48:50.965995 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI 
endpoint Jun 20 19:48:50.966084 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jun 20 19:48:50.966201 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff] Jun 20 19:48:50.966300 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref] Jun 20 19:48:50.966402 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jun 20 19:48:50.966491 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff] Jun 20 19:48:50.966579 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref] Jun 20 19:48:50.966672 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jun 20 19:48:50.966759 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f] Jun 20 19:48:50.966862 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff] Jun 20 19:48:50.966950 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref] Jun 20 19:48:50.966964 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 20 19:48:50.966973 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 20 19:48:50.966983 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 19:48:50.966992 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 20 19:48:50.967001 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 20 19:48:50.967011 kernel: iommu: Default domain type: Translated Jun 20 19:48:50.967020 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:48:50.967033 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:48:50.967042 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 19:48:50.967051 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 20 19:48:50.967060 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jun 20 19:48:50.967152 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 20 19:48:50.969310 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 20 19:48:50.969404 kernel: pci 
0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 19:48:50.969417 kernel: vgaarb: loaded Jun 20 19:48:50.969428 kernel: clocksource: Switched to clocksource kvm-clock Jun 20 19:48:50.969441 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:48:50.969451 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:48:50.969460 kernel: pnp: PnP ACPI init Jun 20 19:48:50.969550 kernel: pnp 00:03: [dma 2] Jun 20 19:48:50.969564 kernel: pnp: PnP ACPI: found 5 devices Jun 20 19:48:50.969574 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 19:48:50.969583 kernel: NET: Registered PF_INET protocol family Jun 20 19:48:50.969593 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 19:48:50.969605 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 20 19:48:50.969615 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:48:50.969624 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 19:48:50.969633 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 19:48:50.969643 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 19:48:50.969652 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 19:48:50.969661 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 19:48:50.969670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:48:50.969679 kernel: NET: Registered PF_XDP protocol family Jun 20 19:48:50.969759 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 20 19:48:50.969837 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 20 19:48:50.969913 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 20 19:48:50.969989 kernel: pci_bus 0000:00: resource 7 [mem 
0xc0000000-0xfebfffff window] Jun 20 19:48:50.970066 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jun 20 19:48:50.970156 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 20 19:48:50.971291 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 20 19:48:50.971308 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:48:50.971322 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 20 19:48:50.971333 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jun 20 19:48:50.971343 kernel: Initialise system trusted keyrings Jun 20 19:48:50.971354 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 19:48:50.971364 kernel: Key type asymmetric registered Jun 20 19:48:50.971374 kernel: Asymmetric key parser 'x509' registered Jun 20 19:48:50.971384 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:48:50.971394 kernel: io scheduler mq-deadline registered Jun 20 19:48:50.971405 kernel: io scheduler kyber registered Jun 20 19:48:50.971416 kernel: io scheduler bfq registered Jun 20 19:48:50.971426 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:48:50.971437 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 20 19:48:50.971447 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 20 19:48:50.971457 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 20 19:48:50.971467 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 20 19:48:50.971477 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:48:50.971487 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:48:50.971498 kernel: random: crng init done Jun 20 19:48:50.971509 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 20 19:48:50.971519 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 19:48:50.971529 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 19:48:50.971539 kernel: 
input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 20 19:48:50.971645 kernel: rtc_cmos 00:04: RTC can wake from S4 Jun 20 19:48:50.971734 kernel: rtc_cmos 00:04: registered as rtc0 Jun 20 19:48:50.971820 kernel: rtc_cmos 00:04: setting system clock to 2025-06-20T19:48:50 UTC (1750448930) Jun 20 19:48:50.971906 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jun 20 19:48:50.971924 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 20 19:48:50.971935 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:48:50.971945 kernel: Segment Routing with IPv6 Jun 20 19:48:50.971955 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:48:50.971965 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:48:50.971974 kernel: Key type dns_resolver registered Jun 20 19:48:50.971984 kernel: IPI shorthand broadcast: enabled Jun 20 19:48:50.971994 kernel: sched_clock: Marking stable (3537007642, 174854367)->(3744150142, -32288133) Jun 20 19:48:50.972004 kernel: registered taskstats version 1 Jun 20 19:48:50.972016 kernel: Loading compiled-in X.509 certificates Jun 20 19:48:50.972026 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94' Jun 20 19:48:50.972036 kernel: Demotion targets for Node 0: null Jun 20 19:48:50.972046 kernel: Key type .fscrypt registered Jun 20 19:48:50.972055 kernel: Key type fscrypt-provisioning registered Jun 20 19:48:50.972065 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:48:50.972075 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:48:50.972085 kernel: ima: No architecture policies found Jun 20 19:48:50.972096 kernel: clk: Disabling unused clocks Jun 20 19:48:50.972106 kernel: Warning: unable to open an initial console. 
Jun 20 19:48:50.972116 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 20 19:48:50.972127 kernel: Write protecting the kernel read-only data: 24576k Jun 20 19:48:50.972137 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 20 19:48:50.972146 kernel: Run /init as init process Jun 20 19:48:50.972156 kernel: with arguments: Jun 20 19:48:50.972780 kernel: /init Jun 20 19:48:50.972809 kernel: with environment: Jun 20 19:48:50.972825 kernel: HOME=/ Jun 20 19:48:50.972835 kernel: TERM=linux Jun 20 19:48:50.972845 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:48:50.972857 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:48:50.972873 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:48:50.972884 systemd[1]: Detected virtualization kvm. Jun 20 19:48:50.972894 systemd[1]: Detected architecture x86-64. Jun 20 19:48:50.972914 systemd[1]: Running in initrd. Jun 20 19:48:50.972927 systemd[1]: No hostname configured, using default hostname. Jun 20 19:48:50.972937 systemd[1]: Hostname set to . Jun 20 19:48:50.972948 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:48:50.972958 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:48:50.972968 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:48:50.972982 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:48:50.972993 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jun 20 19:48:50.973003 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:48:50.973014 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:48:50.973025 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:48:50.973037 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:48:50.973047 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:48:50.973059 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:48:50.973069 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:48:50.973079 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:48:50.973090 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:48:50.973100 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:48:50.973110 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:48:50.973120 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:48:50.973131 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:48:50.973141 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:48:50.973153 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:48:50.973163 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:48:50.973505 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:48:50.973516 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:48:50.973526 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 20 19:48:50.973537 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:48:50.973547 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:48:50.973557 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:48:50.973571 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:48:50.973582 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:48:50.973594 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:48:50.973604 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:48:50.973614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:48:50.973626 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:48:50.973637 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:48:50.973648 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:48:50.973701 systemd-journald[214]: Collecting audit messages is disabled.
Jun 20 19:48:50.973730 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:48:50.973744 systemd-journald[214]: Journal started
Jun 20 19:48:50.973768 systemd-journald[214]: Runtime Journal (/run/log/journal/2213dd2fa8cd431aa6327aeb23781d4d) is 8M, max 78.5M, 70.5M free.
Jun 20 19:48:50.963236 systemd-modules-load[215]: Inserted module 'overlay'
Jun 20 19:48:51.023900 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:48:51.023923 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:48:51.023937 kernel: Bridge firewalling registered
Jun 20 19:48:50.997812 systemd-modules-load[215]: Inserted module 'br_netfilter'
Jun 20 19:48:51.024728 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:48:51.025471 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:48:51.027282 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:48:51.031540 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:48:51.034296 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:48:51.042260 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:48:51.047269 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:48:51.058068 systemd-tmpfiles[232]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:48:51.062142 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:48:51.068013 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:48:51.069555 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:48:51.073265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:48:51.075399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:48:51.077389 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:48:51.096122 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:48:51.120206 systemd-resolved[249]: Positive Trust Anchors:
Jun 20 19:48:51.120217 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:48:51.120259 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:48:51.126623 systemd-resolved[249]: Defaulting to hostname 'linux'.
Jun 20 19:48:51.128467 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:48:51.129061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:48:51.176194 kernel: SCSI subsystem initialized
Jun 20 19:48:51.186189 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:48:51.198186 kernel: iscsi: registered transport (tcp)
Jun 20 19:48:51.220558 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:48:51.220663 kernel: QLogic iSCSI HBA Driver
Jun 20 19:48:51.248754 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:48:51.260049 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:48:51.261269 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:48:51.349734 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:48:51.355007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:48:51.452278 kernel: raid6: sse2x4 gen() 5037 MB/s
Jun 20 19:48:51.470269 kernel: raid6: sse2x2 gen() 14761 MB/s
Jun 20 19:48:51.488633 kernel: raid6: sse2x1 gen() 9984 MB/s
Jun 20 19:48:51.488695 kernel: raid6: using algorithm sse2x2 gen() 14761 MB/s
Jun 20 19:48:51.507664 kernel: raid6: .... xor() 9213 MB/s, rmw enabled
Jun 20 19:48:51.507733 kernel: raid6: using ssse3x2 recovery algorithm
Jun 20 19:48:51.530349 kernel: xor: measuring software checksum speed
Jun 20 19:48:51.530418 kernel: prefetch64-sse : 18498 MB/sec
Jun 20 19:48:51.533859 kernel: generic_sse : 14509 MB/sec
Jun 20 19:48:51.533921 kernel: xor: using function: prefetch64-sse (18498 MB/sec)
Jun 20 19:48:51.733235 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 19:48:51.741901 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:48:51.747442 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:48:51.775423 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jun 20 19:48:51.781363 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:48:51.788440 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 19:48:51.810875 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Jun 20 19:48:51.843301 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:48:51.848300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:48:51.898864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:48:51.904034 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 19:48:51.984776 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jun 20 19:48:51.992215 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jun 20 19:48:52.010188 kernel: libata version 3.00 loaded.
Jun 20 19:48:52.013652 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jun 20 19:48:52.013681 kernel: ata_piix 0000:00:01.1: version 2.13
Jun 20 19:48:52.018258 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 19:48:52.018291 kernel: GPT:17805311 != 20971519
Jun 20 19:48:52.018303 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 19:48:52.019746 kernel: GPT:17805311 != 20971519
Jun 20 19:48:52.020662 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 19:48:52.021875 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:48:52.026189 kernel: scsi host0: ata_piix
Jun 20 19:48:52.026725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:48:52.026897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:48:52.028945 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:48:52.031092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:48:52.033161 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:48:52.042743 kernel: scsi host1: ata_piix
Jun 20 19:48:52.043578 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0
Jun 20 19:48:52.043594 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0
Jun 20 19:48:52.101301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:48:52.286135 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:48:52.297474 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 20 19:48:52.308248 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:48:52.318767 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 20 19:48:52.327256 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 20 19:48:52.327859 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 20 19:48:52.330980 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:48:52.333402 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:48:52.335643 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:48:52.338443 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 19:48:52.342278 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 19:48:52.366673 disk-uuid[568]: Primary Header is updated.
Jun 20 19:48:52.366673 disk-uuid[568]: Secondary Entries is updated.
Jun 20 19:48:52.366673 disk-uuid[568]: Secondary Header is updated.
Jun 20 19:48:52.381181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:48:52.393707 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:48:53.402244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:48:53.403820 disk-uuid[569]: The operation has completed successfully.
Jun 20 19:48:53.466837 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 19:48:53.467824 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 19:48:53.499735 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 19:48:53.535748 sh[587]: Success
Jun 20 19:48:53.587147 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 19:48:53.587295 kernel: device-mapper: uevent: version 1.0.3
Jun 20 19:48:53.593213 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jun 20 19:48:53.613214 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3"
Jun 20 19:48:53.678835 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 19:48:53.683341 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 19:48:53.685270 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 19:48:53.721303 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jun 20 19:48:53.721372 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (600)
Jun 20 19:48:53.735245 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966
Jun 20 19:48:53.735311 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:48:53.740800 kernel: BTRFS info (device dm-0): using free-space-tree
Jun 20 19:48:53.760568 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 19:48:53.763933 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:48:53.765544 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 19:48:53.767266 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 19:48:53.775399 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 19:48:53.818278 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (632)
Jun 20 19:48:53.826462 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:48:53.826493 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:48:53.830723 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:48:53.845209 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:48:53.846261 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 19:48:53.849283 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 19:48:53.881120 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:48:53.883546 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:48:53.931109 systemd-networkd[771]: lo: Link UP
Jun 20 19:48:53.931879 systemd-networkd[771]: lo: Gained carrier
Jun 20 19:48:53.933002 systemd-networkd[771]: Enumeration completed
Jun 20 19:48:53.934074 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:48:53.934721 systemd[1]: Reached target network.target - Network.
Jun 20 19:48:53.935573 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:48:53.935577 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:48:53.938142 systemd-networkd[771]: eth0: Link UP
Jun 20 19:48:53.938146 systemd-networkd[771]: eth0: Gained carrier
Jun 20 19:48:53.938155 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:48:53.950229 systemd-networkd[771]: eth0: DHCPv4 address 172.24.4.123/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jun 20 19:48:54.043343 ignition[726]: Ignition 2.21.0
Jun 20 19:48:54.043360 ignition[726]: Stage: fetch-offline
Jun 20 19:48:54.043390 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:48:54.044940 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:48:54.043399 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:48:54.043477 ignition[726]: parsed url from cmdline: ""
Jun 20 19:48:54.047425 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 19:48:54.043481 ignition[726]: no config URL provided
Jun 20 19:48:54.047502 systemd-resolved[249]: Detected conflict on linux IN A 172.24.4.123
Jun 20 19:48:54.043487 ignition[726]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:48:54.047510 systemd-resolved[249]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Jun 20 19:48:54.043494 ignition[726]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:48:54.043499 ignition[726]: failed to fetch config: resource requires networking
Jun 20 19:48:54.043652 ignition[726]: Ignition finished successfully
Jun 20 19:48:54.084147 ignition[782]: Ignition 2.21.0
Jun 20 19:48:54.085585 ignition[782]: Stage: fetch
Jun 20 19:48:54.085864 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:48:54.085883 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:48:54.086031 ignition[782]: parsed url from cmdline: ""
Jun 20 19:48:54.086038 ignition[782]: no config URL provided
Jun 20 19:48:54.086048 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:48:54.086062 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:48:54.086280 ignition[782]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jun 20 19:48:54.086367 ignition[782]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jun 20 19:48:54.087118 ignition[782]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jun 20 19:48:54.278610 ignition[782]: GET result: OK
Jun 20 19:48:54.278901 ignition[782]: parsing config with SHA512: ef023660eebdbe4cca36c2dc45f52e31218d15c11609b9609e4cfba143cef335ba5543330865cf7d2922d59353717e706c6c9ba1bb0b0c8ae44b23ef23beb9bb
Jun 20 19:48:54.289576 unknown[782]: fetched base config from "system"
Jun 20 19:48:54.289601 unknown[782]: fetched base config from "system"
Jun 20 19:48:54.290467 ignition[782]: fetch: fetch complete
Jun 20 19:48:54.289615 unknown[782]: fetched user config from "openstack"
Jun 20 19:48:54.290480 ignition[782]: fetch: fetch passed
Jun 20 19:48:54.295889 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 19:48:54.290568 ignition[782]: Ignition finished successfully
Jun 20 19:48:54.300454 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:48:54.366773 ignition[789]: Ignition 2.21.0
Jun 20 19:48:54.366829 ignition[789]: Stage: kargs
Jun 20 19:48:54.367256 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:48:54.367286 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:48:54.369423 ignition[789]: kargs: kargs passed
Jun 20 19:48:54.372027 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:48:54.369515 ignition[789]: Ignition finished successfully
Jun 20 19:48:54.376864 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:48:54.425546 ignition[795]: Ignition 2.21.0
Jun 20 19:48:54.425580 ignition[795]: Stage: disks
Jun 20 19:48:54.425919 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:48:54.425943 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:48:54.431233 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:48:54.427951 ignition[795]: disks: disks passed
Jun 20 19:48:54.434763 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:48:54.428050 ignition[795]: Ignition finished successfully
Jun 20 19:48:54.437101 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 19:48:54.439692 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:48:54.442585 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:48:54.445077 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:48:54.450402 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:48:54.511960 systemd-fsck[803]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jun 20 19:48:54.526291 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:48:54.530405 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:48:54.742217 kernel: EXT4-fs (vda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none.
Jun 20 19:48:54.743280 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:48:54.744822 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:48:54.748631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:48:54.769628 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:48:54.773765 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 20 19:48:54.778454 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jun 20 19:48:54.784400 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:48:54.809632 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (811)
Jun 20 19:48:54.809681 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:48:54.809712 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:48:54.809742 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:48:54.784470 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:48:54.812443 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:48:54.813918 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:48:54.821094 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:48:54.954211 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:48:54.956073 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:48:54.962532 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:48:54.969199 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:48:54.974037 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:48:55.075396 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:48:55.077585 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:48:55.078980 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 19:48:55.092075 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:48:55.094985 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:48:55.117367 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:48:55.126820 ignition[931]: INFO : Ignition 2.21.0
Jun 20 19:48:55.126820 ignition[931]: INFO : Stage: mount
Jun 20 19:48:55.128948 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:48:55.128948 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:48:55.128948 ignition[931]: INFO : mount: mount passed
Jun 20 19:48:55.128948 ignition[931]: INFO : Ignition finished successfully
Jun 20 19:48:55.129229 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:48:55.339612 systemd-networkd[771]: eth0: Gained IPv6LL
Jun 20 19:48:55.989237 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:48:58.003224 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:49:02.016244 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:49:02.026603 coreos-metadata[813]: Jun 20 19:49:02.026 WARN failed to locate config-drive, using the metadata service API instead
Jun 20 19:49:02.068786 coreos-metadata[813]: Jun 20 19:49:02.068 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jun 20 19:49:02.080401 coreos-metadata[813]: Jun 20 19:49:02.080 INFO Fetch successful
Jun 20 19:49:02.081762 coreos-metadata[813]: Jun 20 19:49:02.081 INFO wrote hostname ci-4344-1-0-0-4524070979.novalocal to /sysroot/etc/hostname
Jun 20 19:49:02.084661 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jun 20 19:49:02.084908 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jun 20 19:49:02.093392 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:49:02.123981 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:49:02.162255 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (948)
Jun 20 19:49:02.169471 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:49:02.169538 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:49:02.173330 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:49:02.186065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:49:02.238725 ignition[966]: INFO : Ignition 2.21.0
Jun 20 19:49:02.238725 ignition[966]: INFO : Stage: files
Jun 20 19:49:02.241754 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:49:02.241754 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:49:02.241754 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:49:02.247426 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:49:02.247426 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:49:02.251947 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:49:02.251947 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:49:02.251947 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:49:02.250069 unknown[966]: wrote ssh authorized keys file for user: core
Jun 20 19:49:02.260205 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jun 20 19:49:02.260205 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jun 20 19:49:02.324695 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:49:02.675816 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jun 20 19:49:02.675816 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:49:02.680411 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:49:02.680411 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:49:02.680411 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:49:02.680411 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:49:02.680411 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:49:02.680411 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:49:02.680411 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:49:02.694397 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:49:02.694397 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:49:02.694397 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 19:49:02.694397 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 19:49:02.694397 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 19:49:02.694397 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jun 20 19:49:03.365943 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 20 19:49:05.041658 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 19:49:05.041658 ignition[966]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 20 19:49:05.046901 ignition[966]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:49:05.051195 ignition[966]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:49:05.051195 ignition[966]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 20 19:49:05.051195 ignition[966]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:49:05.059759 ignition[966]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:49:05.059759 ignition[966]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:49:05.059759 ignition[966]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:49:05.059759 ignition[966]: INFO : files: files passed
Jun 20 19:49:05.059759 ignition[966]: INFO : Ignition finished successfully
Jun 20 19:49:05.053529 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:49:05.058303 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:49:05.062324 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:49:05.077402 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:49:05.077508 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:49:05.086717 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:49:05.086717 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:49:05.091869 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:49:05.090126 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:49:05.092867 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:49:05.095957 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:49:05.163412 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:49:05.163639 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:49:05.166766 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:49:05.169542 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:49:05.172540 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:49:05.175304 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:49:05.229062 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:49:05.234309 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:49:05.271399 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:49:05.273116 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:49:05.276369 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:49:05.279341 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:49:05.279736 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:49:05.282724 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:49:05.284587 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:49:05.287612 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:49:05.290210 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:49:05.293083 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:49:05.296321 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:49:05.299306 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:49:05.302349 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:49:05.305505 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:49:05.308327 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:49:05.311499 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:49:05.314115 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:49:05.314550 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:49:05.317555 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:49:05.319517 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:49:05.322013 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:49:05.322813 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:49:05.325149 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jun 20 19:49:05.325465 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:49:05.329538 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:49:05.329957 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:49:05.333244 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:49:05.333625 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:49:05.339518 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:49:05.350530 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:49:05.353404 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:49:05.354875 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:49:05.358876 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:49:05.360886 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:49:05.371128 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:49:05.371230 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:49:05.382436 ignition[1019]: INFO : Ignition 2.21.0
Jun 20 19:49:05.384837 ignition[1019]: INFO : Stage: umount
Jun 20 19:49:05.384837 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:49:05.384837 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:49:05.384837 ignition[1019]: INFO : umount: umount passed
Jun 20 19:49:05.384837 ignition[1019]: INFO : Ignition finished successfully
Jun 20 19:49:05.385072 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:49:05.385198 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:49:05.386278 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:49:05.386345 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:49:05.388561 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:49:05.388612 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:49:05.389116 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:49:05.389154 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:49:05.389654 systemd[1]: Stopped target network.target - Network.
Jun 20 19:49:05.390089 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:49:05.390130 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:49:05.394318 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:49:05.395075 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:49:05.395759 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:49:05.396869 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:49:05.397336 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:49:05.398347 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:49:05.398379 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:49:05.399402 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:49:05.399433 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:49:05.400294 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:49:05.400350 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:49:05.401186 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:49:05.401226 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:49:05.402444 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:49:05.403735 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:49:05.405635 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:49:05.406321 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:49:05.406412 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:49:05.408751 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:49:05.408839 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:49:05.411890 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:49:05.412119 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:49:05.412242 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:49:05.413976 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:49:05.414873 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:49:05.418159 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:49:05.418214 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:49:05.419213 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:49:05.419263 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:49:05.421058 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:49:05.422451 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:49:05.422494 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:49:05.424087 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:49:05.424127 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:49:05.426289 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:49:05.426330 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:49:05.427562 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:49:05.427605 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:49:05.429103 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:49:05.431747 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:49:05.431810 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:49:05.436859 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:49:05.437399 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:49:05.438495 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:49:05.438532 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:49:05.440595 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:49:05.440626 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:49:05.441101 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:49:05.441143 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:49:05.442032 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:49:05.442069 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:49:05.443233 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:49:05.443280 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:49:05.445095 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:49:05.446807 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:49:05.446874 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:49:05.449413 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:49:05.449459 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:49:05.450959 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 20 19:49:05.451015 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:49:05.452267 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:49:05.452305 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:49:05.453269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:49:05.453307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:49:05.456323 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:49:05.456374 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jun 20 19:49:05.456412 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:49:05.456456 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:49:05.456784 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:49:05.459268 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:49:05.463285 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:49:05.463374 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:49:05.464460 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:49:05.466087 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:49:05.481373 systemd[1]: Switching root.
Jun 20 19:49:05.523207 systemd-journald[214]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:49:05.523308 systemd-journald[214]: Journal stopped
Jun 20 19:49:07.793402 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:49:07.793473 kernel: SELinux: policy capability open_perms=1
Jun 20 19:49:07.793489 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:49:07.793500 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:49:07.793512 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:49:07.793523 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:49:07.793537 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:49:07.793549 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:49:07.793560 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:49:07.793572 kernel: audit: type=1403 audit(1750448946.321:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:49:07.793592 systemd[1]: Successfully loaded SELinux policy in 80.424ms.
Jun 20 19:49:07.793616 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.725ms.
Jun 20 19:49:07.793630 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:49:07.793643 systemd[1]: Detected virtualization kvm.
Jun 20 19:49:07.793655 systemd[1]: Detected architecture x86-64.
Jun 20 19:49:07.793669 systemd[1]: Detected first boot.
Jun 20 19:49:07.793682 systemd[1]: Hostname set to .
Jun 20 19:49:07.793694 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:49:07.793706 zram_generator::config[1063]: No configuration found.
Jun 20 19:49:07.793719 kernel: Guest personality initialized and is inactive
Jun 20 19:49:07.793730 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:49:07.793742 kernel: Initialized host personality
Jun 20 19:49:07.793753 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:49:07.793767 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:49:07.793780 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:49:07.793792 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:49:07.793805 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:49:07.793817 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:49:07.793830 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:49:07.793842 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:49:07.793860 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:49:07.793872 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:49:07.793887 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:49:07.793900 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:49:07.793912 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:49:07.793924 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:49:07.793937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:49:07.793949 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:49:07.793961 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:49:07.793973 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:49:07.793988 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:49:07.794001 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:49:07.794014 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:49:07.794026 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:49:07.794038 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:49:07.794050 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:49:07.794065 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:49:07.794079 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:49:07.794092 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:49:07.794104 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:49:07.794117 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:49:07.794129 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:49:07.794142 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:49:07.794154 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:49:07.802226 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:49:07.804080 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:49:07.804108 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:49:07.804121 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:49:07.804134 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:49:07.804146 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:49:07.804159 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:49:07.804200 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:49:07.804214 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:49:07.804227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:49:07.804239 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:49:07.804254 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:49:07.804266 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:49:07.804279 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:49:07.804291 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:49:07.804303 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:49:07.804315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:49:07.804327 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:49:07.804339 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:49:07.804353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:49:07.804367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:49:07.804379 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:49:07.804391 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:49:07.804403 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:49:07.804417 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:49:07.804435 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:49:07.804448 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:49:07.804461 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:49:07.804473 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:49:07.804486 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:49:07.804499 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:49:07.804513 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:49:07.804525 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:49:07.804539 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:49:07.804551 kernel: loop: module loaded
Jun 20 19:49:07.804565 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:49:07.804578 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:49:07.804590 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:49:07.804604 systemd[1]: Stopped verity-setup.service.
Jun 20 19:49:07.804617 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:49:07.804629 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:49:07.804641 kernel: fuse: init (API version 7.41)
Jun 20 19:49:07.804652 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:49:07.804665 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:49:07.804680 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:49:07.804693 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:49:07.804707 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:49:07.804719 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:49:07.804731 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:49:07.804743 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:49:07.804755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:49:07.804767 kernel: ACPI: bus type drm_connector registered
Jun 20 19:49:07.804779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:49:07.804791 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:49:07.804803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:49:07.804817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:49:07.804829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:49:07.804841 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:49:07.804853 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:49:07.804866 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:49:07.804909 systemd-journald[1150]: Collecting audit messages is disabled.
Jun 20 19:49:07.804944 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:49:07.804958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:49:07.804971 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:49:07.804983 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:49:07.804997 systemd-journald[1150]: Journal started
Jun 20 19:49:07.805023 systemd-journald[1150]: Runtime Journal (/run/log/journal/2213dd2fa8cd431aa6327aeb23781d4d) is 8M, max 78.5M, 70.5M free.
Jun 20 19:49:07.416624 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:49:07.435282 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 20 19:49:07.435691 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:49:07.821125 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:49:07.822223 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:49:07.825207 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:49:07.834238 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:49:07.838288 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:49:07.842296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:49:07.855909 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:49:07.855969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:49:07.870212 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:49:07.870272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:49:07.873196 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:49:07.881198 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:49:07.892206 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:49:07.898379 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:49:07.900726 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:49:07.902271 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:49:07.903665 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:49:07.914260 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:49:07.922435 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:49:07.923435 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:49:07.925244 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:49:07.930345 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:49:07.936224 kernel: loop0: detected capacity change from 0 to 229808
Jun 20 19:49:07.950261 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:49:07.950886 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:49:07.956541 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:49:07.961668 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:49:07.979614 systemd-journald[1150]: Time spent on flushing to /var/log/journal/2213dd2fa8cd431aa6327aeb23781d4d is 23.936ms for 982 entries.
Jun 20 19:49:07.979614 systemd-journald[1150]: System Journal (/var/log/journal/2213dd2fa8cd431aa6327aeb23781d4d) is 8M, max 584.8M, 576.8M free.
Jun 20 19:49:08.077848 systemd-journald[1150]: Received client request to flush runtime journal.
Jun 20 19:49:07.985466 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jun 20 19:49:07.985480 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jun 20 19:49:07.991465 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:49:07.994416 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:49:08.082036 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:49:08.086637 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:49:08.087210 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:49:08.105261 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:49:08.108861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:49:08.118069 kernel: loop1: detected capacity change from 0 to 8
Jun 20 19:49:08.141202 kernel: loop2: detected capacity change from 0 to 146240
Jun 20 19:49:08.147034 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Jun 20 19:49:08.147054 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Jun 20 19:49:08.152686 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:49:08.189217 kernel: loop3: detected capacity change from 0 to 113872
Jun 20 19:49:08.253211 kernel: loop4: detected capacity change from 0 to 229808
Jun 20 19:49:08.438450 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:49:08.545189 kernel: loop5: detected capacity change from 0 to 8
Jun 20 19:49:08.553201 kernel: loop6: detected capacity change from 0 to 146240
Jun 20 19:49:08.613204 kernel: loop7: detected capacity change from 0 to 113872
Jun 20 19:49:08.662998 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jun 20 19:49:08.663900 (sd-merge)[1228]: Merged extensions into '/usr'.
Jun 20 19:49:08.671459 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:49:08.671478 systemd[1]: Reloading...
Jun 20 19:49:08.782644 zram_generator::config[1252]: No configuration found.
Jun 20 19:49:08.910198 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:49:09.016569 systemd[1]: Reloading finished in 344 ms.
Jun 20 19:49:09.042913 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:49:09.051372 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:49:09.053766 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:49:09.063373 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:49:09.068456 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:49:09.073313 systemd[1]: Reload requested from client PID 1309 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:49:09.073330 systemd[1]: Reloading...
Jun 20 19:49:09.094560 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:49:09.094869 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 19:49:09.095585 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:49:09.096443 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:49:09.097344 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:49:09.097630 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jun 20 19:49:09.097683 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jun 20 19:49:09.106916 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:49:09.106926 systemd-tmpfiles[1310]: Skipping /boot Jun 20 19:49:09.120589 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Jun 20 19:49:09.126157 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:49:09.126194 systemd-tmpfiles[1310]: Skipping /boot Jun 20 19:49:09.162201 zram_generator::config[1339]: No configuration found. Jun 20 19:49:09.230740 ldconfig[1172]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:49:09.358153 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:49:09.425233 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:49:09.453194 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 20 19:49:09.502113 systemd[1]: Reloading finished in 428 ms. 
Jun 20 19:49:09.508196 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jun 20 19:49:09.517346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:49:09.519575 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:49:09.529891 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:49:09.538402 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:49:09.547115 kernel: ACPI: button: Power Button [PWRF]
Jun 20 19:49:09.548392 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:49:09.550601 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:49:09.556395 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:49:09.562141 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:49:09.568973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:49:09.575580 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 19:49:09.586214 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jun 20 19:49:09.589441 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:49:09.589757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:49:09.591543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:49:09.597286 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:49:09.599563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:49:09.600215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:49:09.600334 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:49:09.602472 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 19:49:09.604216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:49:09.612671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:49:09.613076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:49:09.613404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:49:09.614381 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:49:09.614573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:49:09.620240 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 19:49:09.624815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:49:09.625676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:49:09.629263 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:49:09.630552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:49:09.631344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:49:09.631521 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:49:09.633806 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 19:49:09.642456 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 19:49:09.645072 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:49:09.657423 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 20 19:49:09.664495 augenrules[1478]: No rules
Jun 20 19:49:09.667158 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:49:09.668354 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:49:09.673386 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:49:09.673858 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:49:09.686123 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 19:49:09.699926 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:49:09.700141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:49:09.701315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:49:09.701534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:49:09.702629 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:49:09.702787 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:49:09.704635 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:49:09.704845 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:49:09.738553 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 19:49:09.739329 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 19:49:09.757437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:49:09.766197 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jun 20 19:49:09.769973 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jun 20 19:49:09.769060 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:49:09.773440 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:49:09.784319 kernel: Console: switching to colour dummy device 80x25
Jun 20 19:49:09.792752 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jun 20 19:49:09.792816 kernel: [drm] features: -context_init
Jun 20 19:49:09.794318 kernel: [drm] number of scanouts: 1
Jun 20 19:49:09.794344 kernel: [drm] number of cap sets: 0
Jun 20 19:49:09.801715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:49:09.801907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:49:09.806131 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jun 20 19:49:09.804840 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:49:09.807264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:49:09.820226 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 19:49:09.856858 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 19:49:09.951055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:49:09.961393 systemd-networkd[1443]: lo: Link UP
Jun 20 19:49:09.961666 systemd-networkd[1443]: lo: Gained carrier
Jun 20 19:49:09.963027 systemd-networkd[1443]: Enumeration completed
Jun 20 19:49:09.963193 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:49:09.964523 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:49:09.964772 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:49:09.965426 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 19:49:09.966693 systemd-networkd[1443]: eth0: Link UP
Jun 20 19:49:09.966926 systemd-networkd[1443]: eth0: Gained carrier
Jun 20 19:49:09.967015 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:49:09.968056 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 19:49:09.982345 systemd-networkd[1443]: eth0: DHCPv4 address 172.24.4.123/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jun 20 19:49:09.992207 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 20 19:49:09.992366 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 19:49:10.004209 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 19:49:10.004862 systemd-resolved[1444]: Positive Trust Anchors:
Jun 20 19:49:10.005251 systemd-resolved[1444]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:49:10.005296 systemd-resolved[1444]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:49:10.011425 systemd-resolved[1444]: Using system hostname 'ci-4344-1-0-0-4524070979.novalocal'.
Jun 20 19:49:10.013023 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:49:10.013241 systemd[1]: Reached target network.target - Network.
Jun 20 19:49:10.013300 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:49:10.013356 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:49:10.013474 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 19:49:10.013549 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 19:49:10.013607 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jun 20 19:49:10.013778 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 19:49:10.013889 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 19:49:10.013945 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 19:49:10.013994 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 19:49:10.014024 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:49:10.014073 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:49:10.015821 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 19:49:10.016998 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 19:49:10.019689 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 19:49:10.019899 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 19:49:10.019988 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 19:49:10.022614 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 19:49:10.022961 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 19:49:10.023693 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 19:49:10.024491 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:49:10.024601 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:49:10.024762 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:49:10.024808 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:49:10.025807 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 19:49:10.029299 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 19:49:10.039892 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 19:49:10.040783 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:49:10.041564 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 19:49:10.042188 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:49:10.042805 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 19:49:10.042885 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 19:49:10.046363 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jun 20 19:49:10.049356 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 19:49:10.050641 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 19:49:10.060128 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 19:49:10.062187 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:49:10.069690 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:49:10.071263 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:49:10.071765 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:49:10.073936 extend-filesystems[1528]: Found /dev/vda6
Jun 20 19:49:10.074515 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:49:10.077242 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:49:10.083460 extend-filesystems[1528]: Found /dev/vda9
Jun 20 19:49:10.086656 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:49:10.086960 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:49:10.088404 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:49:10.090076 extend-filesystems[1528]: Checking size of /dev/vda9
Jun 20 19:49:10.096304 jq[1526]: false
Jun 20 19:49:10.098592 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Jun 20 19:49:10.098562 oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Jun 20 19:49:10.104401 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:49:10.104593 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:49:10.109473 jq[1540]: true
Jun 20 19:49:10.115355 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting users, quitting
Jun 20 19:49:10.115355 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:49:10.115355 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing group entry cache
Jun 20 19:49:10.115256 oslogin_cache_refresh[1529]: Failure getting users, quitting
Jun 20 19:49:10.115273 oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:49:10.115319 oslogin_cache_refresh[1529]: Refreshing group entry cache
Jun 20 19:49:10.120326 systemd-timesyncd[1476]: Contacted time server 74.208.117.38:123 (0.flatcar.pool.ntp.org).
Jun 20 19:49:10.120397 systemd-timesyncd[1476]: Initial clock synchronization to Fri 2025-06-20 19:49:10.184769 UTC.
Jun 20 19:49:10.120990 update_engine[1538]: I20250620 19:49:10.120929 1538 main.cc:92] Flatcar Update Engine starting
Jun 20 19:49:10.123417 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting groups, quitting
Jun 20 19:49:10.123417 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:49:10.123406 oslogin_cache_refresh[1529]: Failure getting groups, quitting
Jun 20 19:49:10.123419 oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:49:10.130559 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jun 20 19:49:10.132229 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jun 20 19:49:10.136877 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:49:10.142782 tar[1542]: linux-amd64/LICENSE
Jun 20 19:49:10.142782 tar[1542]: linux-amd64/helm
Jun 20 19:49:10.143026 extend-filesystems[1528]: Resized partition /dev/vda9
Jun 20 19:49:10.143116 jq[1555]: true
Jun 20 19:49:10.154199 extend-filesystems[1569]: resize2fs 1.47.2 (1-Jan-2025)
Jun 20 19:49:10.170206 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jun 20 19:49:10.171067 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:49:10.171298 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:49:10.184214 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jun 20 19:49:10.239769 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 20 19:49:10.239769 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 20 19:49:10.239769 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jun 20 19:49:10.240599 extend-filesystems[1528]: Resized filesystem in /dev/vda9
Jun 20 19:49:10.241122 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:49:10.241499 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:49:10.246285 dbus-daemon[1523]: [system] SELinux support is enabled
Jun 20 19:49:10.248363 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:49:10.252117 update_engine[1538]: I20250620 19:49:10.251978 1538 update_check_scheduler.cc:74] Next update check in 2m28s
Jun 20 19:49:10.252866 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:49:10.252897 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:49:10.253257 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:49:10.253277 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:49:10.254640 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:49:10.261957 bash[1585]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:49:10.262666 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:49:10.263132 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:49:10.270847 systemd-logind[1537]: New seat seat0.
Jun 20 19:49:10.271136 systemd[1]: Starting sshkeys.service...
Jun 20 19:49:10.285347 systemd-logind[1537]: Watching system buttons on /dev/input/event2 (Power Button)
Jun 20 19:49:10.285370 systemd-logind[1537]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 19:49:10.285537 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:49:10.311493 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 20 19:49:10.313154 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 20 19:49:10.353784 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:49:10.366189 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:49:10.459213 locksmithd[1590]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 19:49:10.625012 containerd[1551]: time="2025-06-20T19:49:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jun 20 19:49:10.628551 containerd[1551]: time="2025-06-20T19:49:10.628508082Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.655478635Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.897µs"
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.655509463Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.655528048Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.655673070Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.655690483Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.655715800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.655775793Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.655790180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.656046080Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.656063132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.656074503Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:49:10.657179 containerd[1551]: time="2025-06-20T19:49:10.656083891Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 20 19:49:10.659208 containerd[1551]: time="2025-06-20T19:49:10.656161987Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 20 19:49:10.659475 containerd[1551]: time="2025-06-20T19:49:10.659455143Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:49:10.660230 containerd[1551]: time="2025-06-20T19:49:10.660211943Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:49:10.660286 containerd[1551]: time="2025-06-20T19:49:10.660273689Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:49:10.660368 containerd[1551]: time="2025-06-20T19:49:10.660352526Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:49:10.660683 containerd[1551]: time="2025-06-20T19:49:10.660664512Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:49:10.662397 containerd[1551]: time="2025-06-20T19:49:10.662239826Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:49:10.672526 containerd[1551]: time="2025-06-20T19:49:10.672504998Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:49:10.673231 containerd[1551]: time="2025-06-20T19:49:10.673215170Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:49:10.673326 containerd[1551]: time="2025-06-20T19:49:10.673311271Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:49:10.673416 containerd[1551]: time="2025-06-20T19:49:10.673399807Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:49:10.673533 containerd[1551]: time="2025-06-20T19:49:10.673516135Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:49:10.673608 containerd[1551]: time="2025-06-20T19:49:10.673593901Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:49:10.673678 containerd[1551]: time="2025-06-20T19:49:10.673663902Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:49:10.673755 containerd[1551]: time="2025-06-20T19:49:10.673741097Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:49:10.673839 containerd[1551]: time="2025-06-20T19:49:10.673824974Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:49:10.673921 containerd[1551]: time="2025-06-20T19:49:10.673905405Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:49:10.673987 containerd[1551]: time="2025-06-20T19:49:10.673973102Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:49:10.674064 containerd[1551]: time="2025-06-20T19:49:10.674049936Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676274348Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676301388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676321055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676350721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676365439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676376169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676388522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676398671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676410102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676440690Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676459064Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676537451Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676557339Z" level=info msg="Start snapshots syncer"
Jun 20 19:49:10.677108 containerd[1551]: time="2025-06-20T19:49:10.676582726Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:49:10.677476 containerd[1551]: time="2025-06-20T19:49:10.676886897Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:49:10.677476 containerd[1551]: time="2025-06-20T19:49:10.676963711Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:49:10.677589 containerd[1551]: time="2025-06-20T19:49:10.677053659Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 20 19:49:10.677881 containerd[1551]: time="2025-06-20T19:49:10.677687959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 20 19:49:10.677881 containerd[1551]: time="2025-06-20T19:49:10.677737923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 20 19:49:10.677881 containerd[1551]: time="2025-06-20T19:49:10.677749745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 20 19:49:10.677881 containerd[1551]: time="2025-06-20T19:49:10.677770113Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 20 19:49:10.677881 containerd[1551]: time="2025-06-20T19:49:10.677806161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 20 19:49:10.677881 containerd[1551]: time="2025-06-20T19:49:10.677820337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 20 19:49:10.677881 containerd[1551]: time="2025-06-20T19:49:10.677831969Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 20 19:49:10.677881 containerd[1551]: time="2025-06-20T19:49:10.677854541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 20 19:49:10.678145 containerd[1551]: time="2025-06-20T19:49:10.677865873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 20 19:49:10.678145 containerd[1551]: time="2025-06-20T19:49:10.678101034Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 20 19:49:10.678260 containerd[1551]: time="2025-06-20T19:49:10.678243200Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:49:10.678347 containerd[1551]: time="2025-06-20T19:49:10.678327619Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:49:10.678421 containerd[1551]: time="2025-06-20T19:49:10.678407879Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:49:10.678502 containerd[1551]: time="2025-06-20T19:49:10.678469926Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:49:10.678580 containerd[1551]: time="2025-06-20T19:49:10.678549515Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 20 19:49:10.678666 containerd[1551]: time="2025-06-20T19:49:10.678633663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 20 19:49:10.678738 containerd[1551]: time="2025-06-20T19:49:10.678712811Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 20 19:49:10.678814 containerd[1551]: time="2025-06-20T19:49:10.678802018Z" level=info msg="runtime interface created"
Jun 20 19:49:10.679362 containerd[1551]: time="2025-06-20T19:49:10.679199845Z" level=info msg="created NRI interface"
Jun 20 19:49:10.679362 containerd[1551]: time="2025-06-20T19:49:10.679216987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 20 19:49:10.679362 containerd[1551]: time="2025-06-20T19:49:10.679230562Z" level=info msg="Connect containerd service"
Jun 20 19:49:10.679362 containerd[1551]: time="2025-06-20T19:49:10.679255399Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 20 19:49:10.682287
containerd[1551]: time="2025-06-20T19:49:10.681879671Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:49:10.811065 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:49:10.836722 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:49:10.841414 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:49:10.847613 systemd[1]: Started sshd@0-172.24.4.123:22-172.24.4.1:35298.service - OpenSSH per-connection server daemon (172.24.4.1:35298). Jun 20 19:49:10.872191 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:49:10.873982 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:49:10.877511 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:49:10.918215 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:49:10.922872 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:49:10.927373 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:49:10.927692 systemd[1]: Reached target getty.target - Login Prompts. 
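[Annotation] The containerd error above ("no network config found in /etc/cni/net.d") is the expected state before a CNI add-on has installed a network config; the CRI plugin's conf syncer (started later in this log) picks the file up once it appears. For reference, a bridge-type conflist of the kind that would satisfy it looks roughly like the following — a sketch modeled on containerd's documented example config, with an illustrative subnet, not what this node's cluster add-on will actually install:

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into /etc/cni/net.d/ (e.g. as 10-containerd-net.conflist), this would clear the "cni plugin not initialized" condition.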
Jun 20 19:49:10.948721 containerd[1551]: time="2025-06-20T19:49:10.948676829Z" level=info msg="Start subscribing containerd event" Jun 20 19:49:10.948834 containerd[1551]: time="2025-06-20T19:49:10.948728586Z" level=info msg="Start recovering state" Jun 20 19:49:10.948834 containerd[1551]: time="2025-06-20T19:49:10.948829204Z" level=info msg="Start event monitor" Jun 20 19:49:10.948886 containerd[1551]: time="2025-06-20T19:49:10.948845966Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:49:10.948886 containerd[1551]: time="2025-06-20T19:49:10.948854121Z" level=info msg="Start streaming server" Jun 20 19:49:10.948886 containerd[1551]: time="2025-06-20T19:49:10.948868809Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:49:10.948886 containerd[1551]: time="2025-06-20T19:49:10.948877715Z" level=info msg="runtime interface starting up..." Jun 20 19:49:10.948995 containerd[1551]: time="2025-06-20T19:49:10.948888836Z" level=info msg="starting plugins..." Jun 20 19:49:10.948995 containerd[1551]: time="2025-06-20T19:49:10.948904255Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:49:10.949512 containerd[1551]: time="2025-06-20T19:49:10.949308513Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:49:10.950395 containerd[1551]: time="2025-06-20T19:49:10.950268744Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:49:10.950395 containerd[1551]: time="2025-06-20T19:49:10.950375064Z" level=info msg="containerd successfully booted in 0.325715s" Jun 20 19:49:10.950479 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:49:10.988439 tar[1542]: linux-amd64/README.md Jun 20 19:49:11.004630 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
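[Annotation] The "starting cri plugin" config dump earlier in this log shows `"SystemdCgroup":true` for the `runc` runtime. In containerd 2.x that setting comes from /etc/containerd/config.toml, roughly as below — a sketch only; the section IDs match the `io.containerd.cri.v1.runtime` plugin name seen in the log, but the shipped Flatcar config should be consulted for the exact layout:

```toml
version = 3

[plugins.'io.containerd.cri.v1.runtime'.containerd]
  default_runtime_name = 'runc'

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
  runtime_type = 'io.containerd.runc.v2'

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
  SystemdCgroup = true
```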
Jun 20 19:49:11.060237 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:49:11.414251 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:49:11.851569 systemd-networkd[1443]: eth0: Gained IPv6LL Jun 20 19:49:11.855868 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:49:11.857659 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:49:11.862971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:49:11.867882 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:49:11.923571 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:49:12.069469 sshd[1624]: Accepted publickey for core from 172.24.4.1 port 35298 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:49:12.074061 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:49:12.111591 systemd-logind[1537]: New session 1 of user core. Jun 20 19:49:12.114709 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:49:12.118259 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:49:12.142772 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:49:12.145310 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:49:12.154477 (systemd)[1656]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:49:12.164481 systemd-logind[1537]: New session c1 of user core. Jun 20 19:49:12.320617 systemd[1656]: Queued start job for default target default.target. Jun 20 19:49:12.324437 systemd[1656]: Created slice app.slice - User Application Slice. Jun 20 19:49:12.324556 systemd[1656]: Reached target paths.target - Paths. Jun 20 19:49:12.324597 systemd[1656]: Reached target timers.target - Timers. 
Jun 20 19:49:12.327253 systemd[1656]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:49:12.337381 systemd[1656]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:49:12.337580 systemd[1656]: Reached target sockets.target - Sockets. Jun 20 19:49:12.337700 systemd[1656]: Reached target basic.target - Basic System. Jun 20 19:49:12.337901 systemd[1656]: Reached target default.target - Main User Target. Jun 20 19:49:12.338008 systemd[1656]: Startup finished in 166ms. Jun 20 19:49:12.338239 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:49:12.343367 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:49:12.815919 systemd[1]: Started sshd@1-172.24.4.123:22-172.24.4.1:35300.service - OpenSSH per-connection server daemon (172.24.4.1:35300). Jun 20 19:49:13.080235 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:49:13.435237 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:49:14.211403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:49:14.225047 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:49:14.229200 sshd[1667]: Accepted publickey for core from 172.24.4.1 port 35300 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:49:14.232117 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:49:14.246422 systemd-logind[1537]: New session 2 of user core. Jun 20 19:49:14.254592 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:49:14.880230 sshd[1677]: Connection closed by 172.24.4.1 port 35300 Jun 20 19:49:14.881088 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Jun 20 19:49:14.895721 systemd[1]: sshd@1-172.24.4.123:22-172.24.4.1:35300.service: Deactivated successfully. 
Jun 20 19:49:14.899370 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:49:14.901303 systemd-logind[1537]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:49:14.907897 systemd[1]: Started sshd@2-172.24.4.123:22-172.24.4.1:43912.service - OpenSSH per-connection server daemon (172.24.4.1:43912). Jun 20 19:49:14.909858 systemd-logind[1537]: Removed session 2. Jun 20 19:49:15.682456 kubelet[1676]: E0620 19:49:15.682284 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:49:15.687089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:49:15.687482 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:49:15.688121 systemd[1]: kubelet.service: Consumed 2.160s CPU time, 267.4M memory peak. Jun 20 19:49:15.993302 login[1634]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:49:16.005272 systemd-logind[1537]: New session 3 of user core. Jun 20 19:49:16.012567 login[1635]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:49:16.014595 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:49:16.036081 systemd-logind[1537]: New session 4 of user core. Jun 20 19:49:16.041348 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:49:16.322032 sshd[1687]: Accepted publickey for core from 172.24.4.1 port 43912 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:49:16.324772 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:49:16.335080 systemd-logind[1537]: New session 5 of user core. 
Jun 20 19:49:16.344665 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:49:16.937035 sshd[1716]: Connection closed by 172.24.4.1 port 43912 Jun 20 19:49:16.937995 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Jun 20 19:49:16.944288 systemd[1]: sshd@2-172.24.4.123:22-172.24.4.1:43912.service: Deactivated successfully. Jun 20 19:49:16.949507 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:49:16.953389 systemd-logind[1537]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:49:16.956534 systemd-logind[1537]: Removed session 5. Jun 20 19:49:17.100241 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:49:17.114732 coreos-metadata[1522]: Jun 20 19:49:17.114 WARN failed to locate config-drive, using the metadata service API instead Jun 20 19:49:17.163452 coreos-metadata[1522]: Jun 20 19:49:17.163 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jun 20 19:49:17.355706 coreos-metadata[1522]: Jun 20 19:49:17.355 INFO Fetch successful Jun 20 19:49:17.355706 coreos-metadata[1522]: Jun 20 19:49:17.355 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 20 19:49:17.369912 coreos-metadata[1522]: Jun 20 19:49:17.369 INFO Fetch successful Jun 20 19:49:17.369912 coreos-metadata[1522]: Jun 20 19:49:17.369 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jun 20 19:49:17.386380 coreos-metadata[1522]: Jun 20 19:49:17.386 INFO Fetch successful Jun 20 19:49:17.386380 coreos-metadata[1522]: Jun 20 19:49:17.386 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jun 20 19:49:17.400126 coreos-metadata[1522]: Jun 20 19:49:17.400 INFO Fetch successful Jun 20 19:49:17.400126 coreos-metadata[1522]: Jun 20 19:49:17.400 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jun 20 19:49:17.413450 coreos-metadata[1522]: Jun 20 19:49:17.413 INFO Fetch successful 
Jun 20 19:49:17.413450 coreos-metadata[1522]: Jun 20 19:49:17.413 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jun 20 19:49:17.427369 coreos-metadata[1522]: Jun 20 19:49:17.427 INFO Fetch successful Jun 20 19:49:17.463251 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:49:17.475804 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:49:17.478230 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:49:17.490578 coreos-metadata[1593]: Jun 20 19:49:17.490 WARN failed to locate config-drive, using the metadata service API instead Jun 20 19:49:17.532878 coreos-metadata[1593]: Jun 20 19:49:17.532 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jun 20 19:49:17.548756 coreos-metadata[1593]: Jun 20 19:49:17.548 INFO Fetch successful Jun 20 19:49:17.548866 coreos-metadata[1593]: Jun 20 19:49:17.548 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 20 19:49:17.563439 coreos-metadata[1593]: Jun 20 19:49:17.563 INFO Fetch successful Jun 20 19:49:17.570336 unknown[1593]: wrote ssh authorized keys file for user: core Jun 20 19:49:17.758750 update-ssh-keys[1731]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:49:17.761433 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 19:49:17.765538 systemd[1]: Finished sshkeys.service. Jun 20 19:49:17.770555 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:49:17.770889 systemd[1]: Startup finished in 3.698s (kernel) + 15.569s (initrd) + 11.526s (userspace) = 30.794s. Jun 20 19:49:25.800221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:49:25.805850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 20 19:49:26.249817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:49:26.265211 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:49:26.391946 kubelet[1742]: E0620 19:49:26.391805 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:49:26.404224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:49:26.404519 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:49:26.405350 systemd[1]: kubelet.service: Consumed 425ms CPU time, 108.5M memory peak. Jun 20 19:49:26.988563 systemd[1]: Started sshd@3-172.24.4.123:22-172.24.4.1:36508.service - OpenSSH per-connection server daemon (172.24.4.1:36508). Jun 20 19:49:28.125966 sshd[1750]: Accepted publickey for core from 172.24.4.1 port 36508 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:49:28.130472 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:49:28.143224 systemd-logind[1537]: New session 6 of user core. Jun 20 19:49:28.156521 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:49:28.857275 sshd[1752]: Connection closed by 172.24.4.1 port 36508 Jun 20 19:49:28.858143 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jun 20 19:49:28.889157 systemd[1]: sshd@3-172.24.4.123:22-172.24.4.1:36508.service: Deactivated successfully. Jun 20 19:49:28.897392 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:49:28.902117 systemd-logind[1537]: Session 6 logged out. Waiting for processes to exit. 
Jun 20 19:49:28.910964 systemd[1]: Started sshd@4-172.24.4.123:22-172.24.4.1:36522.service - OpenSSH per-connection server daemon (172.24.4.1:36522). Jun 20 19:49:28.915731 systemd-logind[1537]: Removed session 6. Jun 20 19:49:30.025552 sshd[1758]: Accepted publickey for core from 172.24.4.1 port 36522 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:49:30.027756 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:49:30.038667 systemd-logind[1537]: New session 7 of user core. Jun 20 19:49:30.045626 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:49:30.617599 sshd[1760]: Connection closed by 172.24.4.1 port 36522 Jun 20 19:49:30.620127 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Jun 20 19:49:30.643140 systemd[1]: sshd@4-172.24.4.123:22-172.24.4.1:36522.service: Deactivated successfully. Jun 20 19:49:30.650018 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:49:30.655054 systemd-logind[1537]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:49:30.663283 systemd[1]: Started sshd@5-172.24.4.123:22-172.24.4.1:36524.service - OpenSSH per-connection server daemon (172.24.4.1:36524). Jun 20 19:49:30.666010 systemd-logind[1537]: Removed session 7. Jun 20 19:49:32.089623 sshd[1766]: Accepted publickey for core from 172.24.4.1 port 36524 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:49:32.092982 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:49:32.115462 systemd-logind[1537]: New session 8 of user core. Jun 20 19:49:32.127504 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 20 19:49:32.821251 sshd[1768]: Connection closed by 172.24.4.1 port 36524 Jun 20 19:49:32.823118 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jun 20 19:49:32.838786 systemd[1]: sshd@5-172.24.4.123:22-172.24.4.1:36524.service: Deactivated successfully. Jun 20 19:49:32.842421 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:49:32.845102 systemd-logind[1537]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:49:32.857058 systemd[1]: Started sshd@6-172.24.4.123:22-172.24.4.1:36532.service - OpenSSH per-connection server daemon (172.24.4.1:36532). Jun 20 19:49:32.864601 systemd-logind[1537]: Removed session 8. Jun 20 19:49:34.019778 sshd[1774]: Accepted publickey for core from 172.24.4.1 port 36532 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:49:34.023109 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:49:34.036550 systemd-logind[1537]: New session 9 of user core. Jun 20 19:49:34.046555 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:49:34.485727 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:49:34.486992 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:49:34.512146 sudo[1777]: pam_unix(sudo:session): session closed for user root Jun 20 19:49:34.668217 sshd[1776]: Connection closed by 172.24.4.1 port 36532 Jun 20 19:49:34.669165 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Jun 20 19:49:34.687124 systemd[1]: sshd@6-172.24.4.123:22-172.24.4.1:36532.service: Deactivated successfully. Jun 20 19:49:34.691219 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:49:34.693335 systemd-logind[1537]: Session 9 logged out. Waiting for processes to exit. 
Jun 20 19:49:34.699695 systemd[1]: Started sshd@7-172.24.4.123:22-172.24.4.1:37926.service - OpenSSH per-connection server daemon (172.24.4.1:37926). Jun 20 19:49:34.703298 systemd-logind[1537]: Removed session 9. Jun 20 19:49:36.052013 sshd[1783]: Accepted publickey for core from 172.24.4.1 port 37926 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:49:36.055060 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:49:36.069285 systemd-logind[1537]: New session 10 of user core. Jun 20 19:49:36.076477 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:49:36.524754 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:49:36.525478 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:49:36.528118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:49:36.534671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:49:36.567094 sudo[1787]: pam_unix(sudo:session): session closed for user root Jun 20 19:49:36.582003 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:49:36.583485 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:49:36.610242 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:49:36.702504 augenrules[1812]: No rules Jun 20 19:49:36.705885 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:49:36.706493 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jun 20 19:49:36.709066 sudo[1786]: pam_unix(sudo:session): session closed for user root Jun 20 19:49:36.919896 sshd[1785]: Connection closed by 172.24.4.1 port 37926 Jun 20 19:49:36.920380 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Jun 20 19:49:36.944052 systemd[1]: sshd@7-172.24.4.123:22-172.24.4.1:37926.service: Deactivated successfully. Jun 20 19:49:36.948714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:49:36.950052 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:49:36.951601 systemd-logind[1537]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:49:36.959716 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:49:36.963436 systemd[1]: Started sshd@8-172.24.4.123:22-172.24.4.1:37936.service - OpenSSH per-connection server daemon (172.24.4.1:37936). Jun 20 19:49:36.965336 systemd-logind[1537]: Removed session 10. Jun 20 19:49:37.095193 kubelet[1823]: E0620 19:49:37.095127 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:49:37.100830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:49:37.101242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:49:37.102000 systemd[1]: kubelet.service: Consumed 376ms CPU time, 110.6M memory peak. 
Jun 20 19:49:38.289141 sshd[1827]: Accepted publickey for core from 172.24.4.1 port 37936 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:49:38.292105 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:49:38.305286 systemd-logind[1537]: New session 11 of user core. Jun 20 19:49:38.313541 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:49:38.705419 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:49:38.707262 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:49:40.152721 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:49:40.168816 (dockerd)[1854]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:49:40.661779 dockerd[1854]: time="2025-06-20T19:49:40.661710503Z" level=info msg="Starting up" Jun 20 19:49:40.663471 dockerd[1854]: time="2025-06-20T19:49:40.663415025Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:49:40.773755 dockerd[1854]: time="2025-06-20T19:49:40.773692994Z" level=info msg="Loading containers: start." Jun 20 19:49:40.804254 kernel: Initializing XFRM netlink socket Jun 20 19:49:41.684109 systemd-networkd[1443]: docker0: Link UP Jun 20 19:49:41.689325 dockerd[1854]: time="2025-06-20T19:49:41.689251321Z" level=info msg="Loading containers: done." 
Jun 20 19:49:41.706864 dockerd[1854]: time="2025-06-20T19:49:41.706471281Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:49:41.706864 dockerd[1854]: time="2025-06-20T19:49:41.706567616Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:49:41.706864 dockerd[1854]: time="2025-06-20T19:49:41.706682408Z" level=info msg="Initializing buildkit" Jun 20 19:49:41.751331 dockerd[1854]: time="2025-06-20T19:49:41.751295288Z" level=info msg="Completed buildkit initialization" Jun 20 19:49:41.759664 dockerd[1854]: time="2025-06-20T19:49:41.758397951Z" level=info msg="Daemon has completed initialization" Jun 20 19:49:41.759664 dockerd[1854]: time="2025-06-20T19:49:41.758538365Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:49:41.759312 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:49:43.215519 containerd[1551]: time="2025-06-20T19:49:43.215281601Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 20 19:49:44.163964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2511038795.mount: Deactivated successfully. 
Jun 20 19:49:46.280078 containerd[1551]: time="2025-06-20T19:49:46.279313363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:49:46.316647 containerd[1551]: time="2025-06-20T19:49:46.316419760Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079107" Jun 20 19:49:46.349673 containerd[1551]: time="2025-06-20T19:49:46.349482843Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:49:46.386800 containerd[1551]: time="2025-06-20T19:49:46.386572557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:49:46.392225 containerd[1551]: time="2025-06-20T19:49:46.391248509Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 3.175708602s" Jun 20 19:49:46.392225 containerd[1551]: time="2025-06-20T19:49:46.391401165Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jun 20 19:49:46.397017 containerd[1551]: time="2025-06-20T19:49:46.396905315Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 20 19:49:47.300246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jun 20 19:49:47.309847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:49:48.029560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:49:48.053887 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:49:48.146333 kubelet[2119]: E0620 19:49:48.146243 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:49:48.149754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:49:48.149916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:49:48.150509 systemd[1]: kubelet.service: Consumed 404ms CPU time, 108.4M memory peak. 
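[Annotation] The recurring kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state on a kubeadm-style node: the file does not exist until `kubeadm init` or `kubeadm join` writes it, so systemd keeps restarting the unit until then. For orientation, a minimal KubeletConfiguration of the kind kubeadm generates might look like the following — a sketch with illustrative values, not what kubeadm will actually write for this node:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, consistent with SystemdCgroup=true in the
# containerd runc options shown earlier in this log
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```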
Jun 20 19:49:49.757022 containerd[1551]: time="2025-06-20T19:49:49.756553591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:49.758256 containerd[1551]: time="2025-06-20T19:49:49.758219310Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018954"
Jun 20 19:49:49.760525 containerd[1551]: time="2025-06-20T19:49:49.760488724Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:49.764992 containerd[1551]: time="2025-06-20T19:49:49.764941677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:49.768759 containerd[1551]: time="2025-06-20T19:49:49.768046105Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 3.370977994s"
Jun 20 19:49:49.768759 containerd[1551]: time="2025-06-20T19:49:49.768301789Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jun 20 19:49:49.771242 containerd[1551]: time="2025-06-20T19:49:49.770667189Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jun 20 19:49:51.541610 containerd[1551]: time="2025-06-20T19:49:51.541502735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:51.543775 containerd[1551]: time="2025-06-20T19:49:51.543722451Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155063"
Jun 20 19:49:51.545605 containerd[1551]: time="2025-06-20T19:49:51.545490567Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:51.550591 containerd[1551]: time="2025-06-20T19:49:51.550477973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:51.552672 containerd[1551]: time="2025-06-20T19:49:51.552622454Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.781887514s"
Jun 20 19:49:51.553056 containerd[1551]: time="2025-06-20T19:49:51.552872616Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jun 20 19:49:51.555095 containerd[1551]: time="2025-06-20T19:49:51.555042636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jun 20 19:49:53.065962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2455147213.mount: Deactivated successfully.
Jun 20 19:49:53.935055 containerd[1551]: time="2025-06-20T19:49:53.934931111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:53.936809 containerd[1551]: time="2025-06-20T19:49:53.936707073Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892754"
Jun 20 19:49:53.938892 containerd[1551]: time="2025-06-20T19:49:53.938709631Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:53.943691 containerd[1551]: time="2025-06-20T19:49:53.943533988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:53.946217 containerd[1551]: time="2025-06-20T19:49:53.945508381Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.390243999s"
Jun 20 19:49:53.946217 containerd[1551]: time="2025-06-20T19:49:53.945601531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jun 20 19:49:53.946600 containerd[1551]: time="2025-06-20T19:49:53.946412078Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jun 20 19:49:54.726937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219853773.mount: Deactivated successfully.
Jun 20 19:49:55.988307 update_engine[1538]: I20250620 19:49:55.986344 1538 update_attempter.cc:509] Updating boot flags...
Jun 20 19:49:56.332652 containerd[1551]: time="2025-06-20T19:49:56.332479788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:56.334289 containerd[1551]: time="2025-06-20T19:49:56.334214971Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Jun 20 19:49:56.336156 containerd[1551]: time="2025-06-20T19:49:56.336048662Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:56.341545 containerd[1551]: time="2025-06-20T19:49:56.341459872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:49:56.343512 containerd[1551]: time="2025-06-20T19:49:56.343389556Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.396871745s"
Jun 20 19:49:56.343656 containerd[1551]: time="2025-06-20T19:49:56.343632632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jun 20 19:49:56.347203 containerd[1551]: time="2025-06-20T19:49:56.346791129Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 19:49:56.966786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1312744058.mount: Deactivated successfully.
Jun 20 19:49:56.980422 containerd[1551]: time="2025-06-20T19:49:56.980322591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:49:56.984472 containerd[1551]: time="2025-06-20T19:49:56.984391641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jun 20 19:49:56.986243 containerd[1551]: time="2025-06-20T19:49:56.986082889Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:49:56.991460 containerd[1551]: time="2025-06-20T19:49:56.991286402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:49:56.993389 containerd[1551]: time="2025-06-20T19:49:56.993008419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 646.170991ms"
Jun 20 19:49:56.993389 containerd[1551]: time="2025-06-20T19:49:56.993084395Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jun 20 19:49:56.994365 containerd[1551]: time="2025-06-20T19:49:56.994303749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jun 20 19:49:57.646637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1243374508.mount: Deactivated successfully.
Jun 20 19:49:58.298667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 20 19:49:58.305389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:49:58.925131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:49:58.935476 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:49:59.228381 kubelet[2239]: E0620 19:49:59.227266 2239 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:49:59.230308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:49:59.230677 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:49:59.232535 systemd[1]: kubelet.service: Consumed 420ms CPU time, 108.2M memory peak.
Jun 20 19:50:01.894434 containerd[1551]: time="2025-06-20T19:50:01.893660151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:50:01.897978 containerd[1551]: time="2025-06-20T19:50:01.895527371Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183"
Jun 20 19:50:01.899214 containerd[1551]: time="2025-06-20T19:50:01.899148293Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:50:01.904640 containerd[1551]: time="2025-06-20T19:50:01.904548157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:50:01.906567 containerd[1551]: time="2025-06-20T19:50:01.906024300Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.911634678s"
Jun 20 19:50:01.906567 containerd[1551]: time="2025-06-20T19:50:01.906160961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jun 20 19:50:06.323852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:50:06.326723 systemd[1]: kubelet.service: Consumed 420ms CPU time, 108.2M memory peak.
Jun 20 19:50:06.334329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:50:06.364486 systemd[1]: Reload requested from client PID 2313 ('systemctl') (unit session-11.scope)...
Jun 20 19:50:06.364731 systemd[1]: Reloading...
Jun 20 19:50:06.513215 zram_generator::config[2358]: No configuration found.
Jun 20 19:50:06.732949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:50:06.879555 systemd[1]: Reloading finished in 514 ms.
Jun 20 19:50:06.993378 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 20 19:50:06.993562 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 20 19:50:06.994546 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:50:06.994640 systemd[1]: kubelet.service: Consumed 215ms CPU time, 98.2M memory peak.
Jun 20 19:50:06.999068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:50:07.235596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:50:07.254849 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:50:07.344799 kubelet[2425]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:50:07.344799 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:50:07.344799 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:50:07.344799 kubelet[2425]: I0620 19:50:07.344837 2425 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:50:08.963269 kubelet[2425]: I0620 19:50:08.962557 2425 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jun 20 19:50:08.963269 kubelet[2425]: I0620 19:50:08.962730 2425 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:50:08.964674 kubelet[2425]: I0620 19:50:08.964564 2425 server.go:956] "Client rotation is on, will bootstrap in background"
Jun 20 19:50:09.019409 kubelet[2425]: I0620 19:50:09.019362 2425 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:50:09.020435 kubelet[2425]: E0620 19:50:09.020379 2425 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.24.4.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jun 20 19:50:09.035270 kubelet[2425]: I0620 19:50:09.035206 2425 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 19:50:09.042753 kubelet[2425]: I0620 19:50:09.042703 2425 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:50:09.043552 kubelet[2425]: I0620 19:50:09.043468 2425 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:50:09.044038 kubelet[2425]: I0620 19:50:09.043553 2425 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-0-0-4524070979.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:50:09.044301 kubelet[2425]: I0620 19:50:09.044101 2425 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:50:09.044301 kubelet[2425]: I0620 19:50:09.044137 2425 container_manager_linux.go:303] "Creating device plugin manager"
Jun 20 19:50:09.045161 kubelet[2425]: I0620 19:50:09.045122 2425 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:50:09.059812 kubelet[2425]: I0620 19:50:09.059161 2425 kubelet.go:480] "Attempting to sync node with API server"
Jun 20 19:50:09.059812 kubelet[2425]: I0620 19:50:09.059280 2425 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:50:09.059812 kubelet[2425]: I0620 19:50:09.059426 2425 kubelet.go:386] "Adding apiserver pod source"
Jun 20 19:50:09.059812 kubelet[2425]: I0620 19:50:09.059491 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:50:09.064288 kubelet[2425]: E0620 19:50:09.064249 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.24.4.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-0-0-4524070979.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jun 20 19:50:09.072813 kubelet[2425]: E0620 19:50:09.072681 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.24.4.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jun 20 19:50:09.076426 kubelet[2425]: I0620 19:50:09.076385 2425 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 20 19:50:09.077862 kubelet[2425]: I0620 19:50:09.077824 2425 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jun 20 19:50:09.079718 kubelet[2425]: W0620 19:50:09.079695 2425 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 19:50:09.091211 kubelet[2425]: I0620 19:50:09.091187 2425 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:50:09.091430 kubelet[2425]: I0620 19:50:09.091416 2425 server.go:1289] "Started kubelet"
Jun 20 19:50:09.095324 kubelet[2425]: I0620 19:50:09.095055 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:50:09.103136 kubelet[2425]: I0620 19:50:09.102461 2425 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:50:09.103805 kubelet[2425]: E0620 19:50:09.101517 2425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.123:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.123:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-1-0-0-4524070979.novalocal.184ad81978536869 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-1-0-0-4524070979.novalocal,UID:ci-4344-1-0-0-4524070979.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-1-0-0-4524070979.novalocal,},FirstTimestamp:2025-06-20 19:50:09.091373161 +0000 UTC m=+1.811696269,LastTimestamp:2025-06-20 19:50:09.091373161 +0000 UTC m=+1.811696269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-1-0-0-4524070979.novalocal,}"
Jun 20 19:50:09.105185 kubelet[2425]: I0620 19:50:09.105128 2425 server.go:317] "Adding debug handlers to kubelet server"
Jun 20 19:50:09.106793 kubelet[2425]: I0620 19:50:09.106727 2425 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:50:09.108069 kubelet[2425]: I0620 19:50:09.108035 2425 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 19:50:09.108260 kubelet[2425]: I0620 19:50:09.108224 2425 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:50:09.109189 kubelet[2425]: E0620 19:50:09.108846 2425 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found"
Jun 20 19:50:09.109670 kubelet[2425]: I0620 19:50:09.109657 2425 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 19:50:09.109878 kubelet[2425]: I0620 19:50:09.109866 2425 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:50:09.111103 kubelet[2425]: I0620 19:50:09.110984 2425 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:50:09.111945 kubelet[2425]: E0620 19:50:09.111830 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.24.4.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jun 20 19:50:09.112506 kubelet[2425]: E0620 19:50:09.112411 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-0-4524070979.novalocal?timeout=10s\": dial tcp 172.24.4.123:6443: connect: connection refused" interval="200ms"
Jun 20 19:50:09.115041 kubelet[2425]: I0620 19:50:09.115010 2425 factory.go:223] Registration of the systemd container factory successfully
Jun 20 19:50:09.115355 kubelet[2425]: I0620 19:50:09.115307 2425 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:50:09.118460 kubelet[2425]: E0620 19:50:09.118389 2425 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 19:50:09.121361 kubelet[2425]: I0620 19:50:09.121299 2425 factory.go:223] Registration of the containerd container factory successfully
Jun 20 19:50:09.146348 kubelet[2425]: I0620 19:50:09.146293 2425 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:50:09.149074 kubelet[2425]: I0620 19:50:09.148743 2425 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:50:09.149074 kubelet[2425]: I0620 19:50:09.148761 2425 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:50:09.149074 kubelet[2425]: I0620 19:50:09.148784 2425 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:50:09.149472 kubelet[2425]: I0620 19:50:09.149456 2425 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:50:09.149593 kubelet[2425]: I0620 19:50:09.149581 2425 status_manager.go:230] "Starting to sync pod status with apiserver"
Jun 20 19:50:09.149697 kubelet[2425]: I0620 19:50:09.149686 2425 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:50:09.149812 kubelet[2425]: I0620 19:50:09.149800 2425 kubelet.go:2436] "Starting kubelet main sync loop"
Jun 20 19:50:09.150051 kubelet[2425]: E0620 19:50:09.149933 2425 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:50:09.157336 kubelet[2425]: I0620 19:50:09.157310 2425 policy_none.go:49] "None policy: Start"
Jun 20 19:50:09.157635 kubelet[2425]: I0620 19:50:09.157622 2425 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:50:09.158300 kubelet[2425]: I0620 19:50:09.158241 2425 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:50:09.159255 kubelet[2425]: E0620 19:50:09.157461 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.24.4.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jun 20 19:50:09.168882 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 20 19:50:09.190806 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 20 19:50:09.197864 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 20 19:50:09.209520 kubelet[2425]: E0620 19:50:09.209498 2425 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found"
Jun 20 19:50:09.216277 kubelet[2425]: E0620 19:50:09.214612 2425 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jun 20 19:50:09.216277 kubelet[2425]: I0620 19:50:09.214805 2425 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:50:09.216277 kubelet[2425]: I0620 19:50:09.214826 2425 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:50:09.219735 kubelet[2425]: I0620 19:50:09.219666 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:50:09.220772 kubelet[2425]: E0620 19:50:09.220753 2425 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:50:09.221116 kubelet[2425]: E0620 19:50:09.221088 2425 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344-1-0-0-4524070979.novalocal\" not found"
Jun 20 19:50:09.283439 systemd[1]: Created slice kubepods-burstable-podb9968fa17baff220b3559756fd553a6e.slice - libcontainer container kubepods-burstable-podb9968fa17baff220b3559756fd553a6e.slice.
Jun 20 19:50:09.311457 kubelet[2425]: I0620 19:50:09.311103 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9968fa17baff220b3559756fd553a6e-k8s-certs\") pod \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"b9968fa17baff220b3559756fd553a6e\") " pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.311457 kubelet[2425]: I0620 19:50:09.311300 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-ca-certs\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.312199 kubelet[2425]: I0620 19:50:09.311879 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.312199 kubelet[2425]: I0620 19:50:09.311960 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.312199 kubelet[2425]: I0620 19:50:09.312010 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3ebf22a1115171285fb45d1f95992d4-kubeconfig\") pod \"kube-scheduler-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"e3ebf22a1115171285fb45d1f95992d4\") " pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.312199 kubelet[2425]: I0620 19:50:09.312077 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9968fa17baff220b3559756fd553a6e-ca-certs\") pod \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"b9968fa17baff220b3559756fd553a6e\") " pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.314352 kubelet[2425]: I0620 19:50:09.312130 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9968fa17baff220b3559756fd553a6e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"b9968fa17baff220b3559756fd553a6e\") " pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.314352 kubelet[2425]: E0620 19:50:09.313916 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-0-4524070979.novalocal?timeout=10s\": dial tcp 172.24.4.123:6443: connect: connection refused" interval="400ms"
Jun 20 19:50:09.314582 kubelet[2425]: I0620 19:50:09.314509 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.314742 kubelet[2425]: I0620 19:50:09.314611 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.317618 kubelet[2425]: E0620 19:50:09.317565 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.321380 kubelet[2425]: I0620 19:50:09.321253 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.322155 kubelet[2425]: E0620 19:50:09.322098 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.123:6443/api/v1/nodes\": dial tcp 172.24.4.123:6443: connect: connection refused" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.327082 systemd[1]: Created slice kubepods-burstable-pod9b7f3a0768780450411a7965d8d4587b.slice - libcontainer container kubepods-burstable-pod9b7f3a0768780450411a7965d8d4587b.slice.
Jun 20 19:50:09.350540 kubelet[2425]: E0620 19:50:09.350469 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.355766 systemd[1]: Created slice kubepods-burstable-pode3ebf22a1115171285fb45d1f95992d4.slice - libcontainer container kubepods-burstable-pode3ebf22a1115171285fb45d1f95992d4.slice.
Jun 20 19:50:09.361779 kubelet[2425]: E0620 19:50:09.361751 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.526572 kubelet[2425]: I0620 19:50:09.526376 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.528944 kubelet[2425]: E0620 19:50:09.528865 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.123:6443/api/v1/nodes\": dial tcp 172.24.4.123:6443: connect: connection refused" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.622328 containerd[1551]: time="2025-06-20T19:50:09.622039799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-0-0-4524070979.novalocal,Uid:b9968fa17baff220b3559756fd553a6e,Namespace:kube-system,Attempt:0,}"
Jun 20 19:50:09.654403 containerd[1551]: time="2025-06-20T19:50:09.654237500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal,Uid:9b7f3a0768780450411a7965d8d4587b,Namespace:kube-system,Attempt:0,}"
Jun 20 19:50:09.663094 containerd[1551]: time="2025-06-20T19:50:09.662980710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-0-0-4524070979.novalocal,Uid:e3ebf22a1115171285fb45d1f95992d4,Namespace:kube-system,Attempt:0,}"
Jun 20 19:50:09.715323 kubelet[2425]: E0620 19:50:09.715137 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-0-4524070979.novalocal?timeout=10s\": dial tcp 172.24.4.123:6443: connect: connection refused" interval="800ms"
Jun 20 19:50:09.932450 kubelet[2425]: I0620 19:50:09.932368 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:09.933255 kubelet[2425]: E0620 19:50:09.933146 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.123:6443/api/v1/nodes\": dial tcp 172.24.4.123:6443: connect: connection refused" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:10.142128 kubelet[2425]: E0620 19:50:10.141973 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.24.4.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jun 20 19:50:10.174097 kubelet[2425]: E0620 19:50:10.173975 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.24.4.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-0-0-4524070979.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jun 20 19:50:10.371607 kubelet[2425]: E0620 19:50:10.371427 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.24.4.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jun 20 19:50:10.408004 kubelet[2425]: E0620 19:50:10.407950 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.24.4.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jun 20 19:50:10.420415 containerd[1551]: time="2025-06-20T19:50:10.420321640Z" level=info msg="connecting to shim 73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc" address="unix:///run/containerd/s/c8265e16fe113449f820f238250199cf923dac65d531d83c42add05f31d2a4d6" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:50:10.421503 containerd[1551]: time="2025-06-20T19:50:10.421437696Z" level=info msg="connecting to shim 21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9" address="unix:///run/containerd/s/422fa2b4b6dcaa4fc0b1e4021a71b74504aa2694ec8da4e6566b1eab89e29a3a" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:50:10.425372 containerd[1551]: time="2025-06-20T19:50:10.425298469Z" level=info msg="connecting to shim 88b502aa141e4978eea3e1cbe5bc8a95c19fea34b134ba40c16ad2b24e26df67" address="unix:///run/containerd/s/c7f7a8fc6ed82726664e9208e2b559cd456d6e679f01c2068bec826b79e32dc0" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:50:10.479488 systemd[1]: Started cri-containerd-73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc.scope - libcontainer container 73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc.
Jun 20 19:50:10.494670 systemd[1]: Started cri-containerd-21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9.scope - libcontainer container 21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9.
Jun 20 19:50:10.508425 systemd[1]: Started cri-containerd-88b502aa141e4978eea3e1cbe5bc8a95c19fea34b134ba40c16ad2b24e26df67.scope - libcontainer container 88b502aa141e4978eea3e1cbe5bc8a95c19fea34b134ba40c16ad2b24e26df67.
Jun 20 19:50:10.516901 kubelet[2425]: E0620 19:50:10.516752 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-0-4524070979.novalocal?timeout=10s\": dial tcp 172.24.4.123:6443: connect: connection refused" interval="1.6s"
Jun 20 19:50:10.737391 kubelet[2425]: I0620 19:50:10.737224 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:10.740215 kubelet[2425]: E0620 19:50:10.740084 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.123:6443/api/v1/nodes\": dial tcp 172.24.4.123:6443: connect: connection refused" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:11.080754 containerd[1551]: time="2025-06-20T19:50:11.080436574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-0-0-4524070979.novalocal,Uid:e3ebf22a1115171285fb45d1f95992d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9\""
Jun 20 19:50:11.108789 containerd[1551]: time="2025-06-20T19:50:11.108625896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal,Uid:9b7f3a0768780450411a7965d8d4587b,Namespace:kube-system,Attempt:0,} returns sandbox id \"73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc\""
Jun 20 19:50:11.135487 kubelet[2425]: E0620 19:50:11.129160 2425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.123:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.123:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-1-0-0-4524070979.novalocal.184ad81978536869 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-1-0-0-4524070979.novalocal,UID:ci-4344-1-0-0-4524070979.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-1-0-0-4524070979.novalocal,},FirstTimestamp:2025-06-20 19:50:09.091373161 +0000 UTC m=+1.811696269,LastTimestamp:2025-06-20 19:50:09.091373161 +0000 UTC m=+1.811696269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-1-0-0-4524070979.novalocal,}"
Jun 20 19:50:11.176597 kubelet[2425]: E0620 19:50:11.176462 2425 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.24.4.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jun 20 19:50:11.257996 containerd[1551]: time="2025-06-20T19:50:11.257725901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-0-0-4524070979.novalocal,Uid:b9968fa17baff220b3559756fd553a6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"88b502aa141e4978eea3e1cbe5bc8a95c19fea34b134ba40c16ad2b24e26df67\""
Jun 20 19:50:11.260243 containerd[1551]: time="2025-06-20T19:50:11.259058438Z" level=info msg="CreateContainer within sandbox \"21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jun 20 19:50:11.418763 containerd[1551]: time="2025-06-20T19:50:11.418388678Z" level=info msg="CreateContainer within sandbox \"73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jun 20 19:50:11.456929 containerd[1551]: time="2025-06-20T19:50:11.456723453Z" level=info msg="CreateContainer within sandbox \"88b502aa141e4978eea3e1cbe5bc8a95c19fea34b134ba40c16ad2b24e26df67\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jun 20 19:50:11.832515 containerd[1551]: time="2025-06-20T19:50:11.832419591Z" level=info msg="Container a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:50:11.845940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350472338.mount: Deactivated successfully.
Jun 20 19:50:11.989001 containerd[1551]: time="2025-06-20T19:50:11.988884278Z" level=info msg="Container cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:50:11.991913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035932936.mount: Deactivated successfully.
Jun 20 19:50:12.006645 containerd[1551]: time="2025-06-20T19:50:12.006563360Z" level=info msg="Container 8bba20d58697886c1d10e145c2032ccb1622d05807707c4b6f4104609c2e1cd6: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:50:12.022685 containerd[1551]: time="2025-06-20T19:50:12.022509600Z" level=info msg="CreateContainer within sandbox \"21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5\""
Jun 20 19:50:12.026140 containerd[1551]: time="2025-06-20T19:50:12.026030693Z" level=info msg="StartContainer for \"a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5\""
Jun 20 19:50:12.028030 containerd[1551]: time="2025-06-20T19:50:12.027957975Z" level=info msg="CreateContainer within sandbox \"73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475\""
Jun 20 19:50:12.030871 containerd[1551]: time="2025-06-20T19:50:12.030785334Z" level=info msg="StartContainer for \"cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475\""
Jun 20 19:50:12.032781 containerd[1551]: time="2025-06-20T19:50:12.032732854Z" level=info msg="connecting to shim a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5" address="unix:///run/containerd/s/422fa2b4b6dcaa4fc0b1e4021a71b74504aa2694ec8da4e6566b1eab89e29a3a" protocol=ttrpc version=3
Jun 20 19:50:12.033413 containerd[1551]: time="2025-06-20T19:50:12.033348180Z" level=info msg="connecting to shim cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475" address="unix:///run/containerd/s/c8265e16fe113449f820f238250199cf923dac65d531d83c42add05f31d2a4d6" protocol=ttrpc version=3
Jun 20 19:50:12.036672 containerd[1551]: time="2025-06-20T19:50:12.036590275Z" level=info msg="CreateContainer within sandbox \"88b502aa141e4978eea3e1cbe5bc8a95c19fea34b134ba40c16ad2b24e26df67\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8bba20d58697886c1d10e145c2032ccb1622d05807707c4b6f4104609c2e1cd6\""
Jun 20 19:50:12.039001 containerd[1551]: time="2025-06-20T19:50:12.038960807Z" level=info msg="StartContainer for \"8bba20d58697886c1d10e145c2032ccb1622d05807707c4b6f4104609c2e1cd6\""
Jun 20 19:50:12.046845 containerd[1551]: time="2025-06-20T19:50:12.046783602Z" level=info msg="connecting to shim 8bba20d58697886c1d10e145c2032ccb1622d05807707c4b6f4104609c2e1cd6" address="unix:///run/containerd/s/c7f7a8fc6ed82726664e9208e2b559cd456d6e679f01c2068bec826b79e32dc0" protocol=ttrpc version=3
Jun 20 19:50:12.074354 systemd[1]: Started cri-containerd-cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475.scope - libcontainer container cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475.
Jun 20 19:50:12.082341 systemd[1]: Started cri-containerd-8bba20d58697886c1d10e145c2032ccb1622d05807707c4b6f4104609c2e1cd6.scope - libcontainer container 8bba20d58697886c1d10e145c2032ccb1622d05807707c4b6f4104609c2e1cd6.
Jun 20 19:50:12.093502 systemd[1]: Started cri-containerd-a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5.scope - libcontainer container a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5.
Jun 20 19:50:12.119677 kubelet[2425]: E0620 19:50:12.119590 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-0-4524070979.novalocal?timeout=10s\": dial tcp 172.24.4.123:6443: connect: connection refused" interval="3.2s"
Jun 20 19:50:12.192274 containerd[1551]: time="2025-06-20T19:50:12.192210068Z" level=info msg="StartContainer for \"cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475\" returns successfully"
Jun 20 19:50:12.212793 containerd[1551]: time="2025-06-20T19:50:12.212740545Z" level=info msg="StartContainer for \"8bba20d58697886c1d10e145c2032ccb1622d05807707c4b6f4104609c2e1cd6\" returns successfully"
Jun 20 19:50:12.225697 containerd[1551]: time="2025-06-20T19:50:12.225644860Z" level=info msg="StartContainer for \"a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5\" returns successfully"
Jun 20 19:50:12.344810 kubelet[2425]: I0620 19:50:12.344327 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:12.346578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875200332.mount: Deactivated successfully.
Jun 20 19:50:13.211201 kubelet[2425]: E0620 19:50:13.210806 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:13.211680 kubelet[2425]: E0620 19:50:13.211634 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:13.212256 kubelet[2425]: E0620 19:50:13.212042 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:14.213470 kubelet[2425]: E0620 19:50:14.213018 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:14.213470 kubelet[2425]: E0620 19:50:14.213223 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-0-4524070979.novalocal\" not found" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.011721 kubelet[2425]: I0620 19:50:15.011615 2425 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.011721 kubelet[2425]: E0620 19:50:15.011717 2425 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344-1-0-0-4524070979.novalocal\": node \"ci-4344-1-0-0-4524070979.novalocal\" not found"
Jun 20 19:50:15.013398 kubelet[2425]: I0620 19:50:15.013365 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.042690 kubelet[2425]: E0620 19:50:15.042587 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-0-0-4524070979.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.042690 kubelet[2425]: I0620 19:50:15.042675 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.048366 kubelet[2425]: E0620 19:50:15.048277 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.048366 kubelet[2425]: I0620 19:50:15.048322 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.060213 kubelet[2425]: E0620 19:50:15.060150 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.067839 kubelet[2425]: I0620 19:50:15.067794 2425 apiserver.go:52] "Watching apiserver"
Jun 20 19:50:15.110411 kubelet[2425]: I0620 19:50:15.110348 2425 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:50:15.215124 kubelet[2425]: I0620 19:50:15.213800 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.215896 kubelet[2425]: I0620 19:50:15.215678 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.218412 kubelet[2425]: E0620 19:50:15.218377 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-0-0-4524070979.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:15.219937 kubelet[2425]: E0620 19:50:15.219859 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:16.225239 kubelet[2425]: I0620 19:50:16.222395 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:16.225239 kubelet[2425]: I0620 19:50:16.222743 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:16.329020 kubelet[2425]: I0620 19:50:16.328934 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:16.438061 kubelet[2425]: I0620 19:50:16.437058 2425 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 19:50:16.438061 kubelet[2425]: I0620 19:50:16.437553 2425 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 19:50:16.438614 kubelet[2425]: I0620 19:50:16.438540 2425 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 19:50:18.090300 systemd[1]: Reload requested from client PID 2706 ('systemctl') (unit session-11.scope)...
Jun 20 19:50:18.090430 systemd[1]: Reloading...
Jun 20 19:50:18.269259 zram_generator::config[2757]: No configuration found.
Jun 20 19:50:18.402145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:50:18.625013 systemd[1]: Reloading finished in 532 ms.
Jun 20 19:50:18.658640 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:50:18.676984 systemd[1]: kubelet.service: Deactivated successfully.
Jun 20 19:50:18.677321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:50:18.677421 systemd[1]: kubelet.service: Consumed 2.695s CPU time, 132M memory peak.
Jun 20 19:50:18.679747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:50:18.961139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:50:18.974545 (kubelet)[2815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:50:19.048616 kubelet[2815]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:50:19.048616 kubelet[2815]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:50:19.048616 kubelet[2815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:50:19.048616 kubelet[2815]: I0620 19:50:19.048136 2815 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:50:19.062424 kubelet[2815]: I0620 19:50:19.062357 2815 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jun 20 19:50:19.062424 kubelet[2815]: I0620 19:50:19.062393 2815 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:50:19.062712 kubelet[2815]: I0620 19:50:19.062655 2815 server.go:956] "Client rotation is on, will bootstrap in background"
Jun 20 19:50:19.064696 kubelet[2815]: I0620 19:50:19.064655 2815 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jun 20 19:50:19.068220 kubelet[2815]: I0620 19:50:19.068110 2815 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:50:19.079416 kubelet[2815]: I0620 19:50:19.079326 2815 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 19:50:19.086039 kubelet[2815]: I0620 19:50:19.086001 2815 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:50:19.086538 kubelet[2815]: I0620 19:50:19.086448 2815 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:50:19.086781 kubelet[2815]: I0620 19:50:19.086516 2815 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-0-0-4524070979.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:50:19.086781 kubelet[2815]: I0620 19:50:19.086785 2815 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:50:19.087487 kubelet[2815]: I0620 19:50:19.086804 2815 container_manager_linux.go:303] "Creating device plugin manager"
Jun 20 19:50:19.087487 kubelet[2815]: I0620 19:50:19.086913 2815 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:50:19.088982 kubelet[2815]: I0620 19:50:19.088691 2815 kubelet.go:480] "Attempting to sync node with API server"
Jun 20 19:50:19.088982 kubelet[2815]: I0620 19:50:19.088769 2815 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:50:19.088982 kubelet[2815]: I0620 19:50:19.088887 2815 kubelet.go:386] "Adding apiserver pod source"
Jun 20 19:50:19.089666 kubelet[2815]: I0620 19:50:19.089539 2815 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:50:19.114198 kubelet[2815]: I0620 19:50:19.107776 2815 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 20 19:50:19.114198 kubelet[2815]: I0620 19:50:19.113977 2815 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jun 20 19:50:19.120239 kubelet[2815]: I0620 19:50:19.118606 2815 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:50:19.120239 kubelet[2815]: I0620 19:50:19.118681 2815 server.go:1289] "Started kubelet"
Jun 20 19:50:19.121336 kubelet[2815]: I0620 19:50:19.121250 2815 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:50:19.122287 kubelet[2815]: I0620 19:50:19.121903 2815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:50:19.122659 kubelet[2815]: I0620 19:50:19.122494 2815 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:50:19.125828 kubelet[2815]: I0620 19:50:19.125485 2815 server.go:317] "Adding debug handlers to kubelet server"
Jun 20 19:50:19.131896 kubelet[2815]: I0620 19:50:19.121904 2815 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:50:19.132436 kubelet[2815]: I0620 19:50:19.132415 2815 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:50:19.135474 kubelet[2815]: I0620 19:50:19.135451 2815 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 19:50:19.135779 kubelet[2815]: I0620 19:50:19.135760 2815 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 19:50:19.136079 kubelet[2815]: I0620 19:50:19.136061 2815 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:50:19.137053 kubelet[2815]: I0620 19:50:19.137009 2815 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:50:19.142979 kubelet[2815]: E0620 19:50:19.142897 2815 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 19:50:19.143981 kubelet[2815]: I0620 19:50:19.143939 2815 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:50:19.145841 kubelet[2815]: I0620 19:50:19.145803 2815 factory.go:223] Registration of the containerd container factory successfully
Jun 20 19:50:19.145841 kubelet[2815]: I0620 19:50:19.145820 2815 factory.go:223] Registration of the systemd container factory successfully
Jun 20 19:50:19.146319 kubelet[2815]: I0620 19:50:19.146300 2815 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:50:19.146420 kubelet[2815]: I0620 19:50:19.146409 2815 status_manager.go:230] "Starting to sync pod status with apiserver"
Jun 20 19:50:19.146521 kubelet[2815]: I0620 19:50:19.146508 2815 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:50:19.146587 kubelet[2815]: I0620 19:50:19.146578 2815 kubelet.go:2436] "Starting kubelet main sync loop"
Jun 20 19:50:19.146695 kubelet[2815]: E0620 19:50:19.146675 2815 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:50:19.233745 kubelet[2815]: I0620 19:50:19.233604 2815 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:50:19.233745 kubelet[2815]: I0620 19:50:19.233630 2815 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:50:19.233745 kubelet[2815]: I0620 19:50:19.233668 2815 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:50:19.235553 kubelet[2815]: I0620 19:50:19.235517 2815 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 20 19:50:19.235644 kubelet[2815]: I0620 19:50:19.235549 2815 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 20 19:50:19.235644 kubelet[2815]: I0620 19:50:19.235607 2815 policy_none.go:49] "None policy: Start"
Jun 20 19:50:19.235741 kubelet[2815]: I0620 19:50:19.235660 2815 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:50:19.235741 kubelet[2815]: I0620 19:50:19.235708 2815 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:50:19.235909 kubelet[2815]: I0620 19:50:19.235878 2815 state_mem.go:75] "Updated machine memory state"
Jun 20 19:50:19.245700 kubelet[2815]: E0620 19:50:19.245611 2815 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jun 20 19:50:19.246489 kubelet[2815]: I0620 19:50:19.246120 2815 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:50:19.246489 kubelet[2815]: I0620 19:50:19.246197 2815 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:50:19.251801 kubelet[2815]: I0620 19:50:19.250664 2815 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:50:19.255127 kubelet[2815]: I0620 19:50:19.252958 2815 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.255339 kubelet[2815]: I0620 19:50:19.253116 2815 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.255422 kubelet[2815]: I0620 19:50:19.253310 2815 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.255494 kubelet[2815]: E0620 19:50:19.254411 2815 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:50:19.279752 kubelet[2815]: I0620 19:50:19.279718 2815 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 19:50:19.279752 kubelet[2815]: I0620 19:50:19.279772 2815 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 19:50:19.279997 kubelet[2815]: E0620 19:50:19.279820 2815 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.279997 kubelet[2815]: E0620 19:50:19.279892 2815 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-0-0-4524070979.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.279997 kubelet[2815]: I0620 19:50:19.279956 2815 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 19:50:19.280415 kubelet[2815]: E0620 19:50:19.280002 2815 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.338359 kubelet[2815]: I0620 19:50:19.338117 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9968fa17baff220b3559756fd553a6e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"b9968fa17baff220b3559756fd553a6e\") " pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.338359 kubelet[2815]: I0620 19:50:19.338181 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.338359 kubelet[2815]: I0620 19:50:19.338206 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.338359 kubelet[2815]: I0620 19:50:19.338226 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.338633 kubelet[2815]: I0620 19:50:19.338255 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3ebf22a1115171285fb45d1f95992d4-kubeconfig\") pod \"kube-scheduler-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"e3ebf22a1115171285fb45d1f95992d4\") " pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal"
Jun 20 19:50:19.338633 kubelet[2815]: I0620 19:50:19.338278 2815 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9968fa17baff220b3559756fd553a6e-ca-certs\") pod \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"b9968fa17baff220b3559756fd553a6e\") " pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:19.338633 kubelet[2815]: I0620 19:50:19.338296 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9968fa17baff220b3559756fd553a6e-k8s-certs\") pod \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"b9968fa17baff220b3559756fd553a6e\") " pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:19.338633 kubelet[2815]: I0620 19:50:19.338314 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-ca-certs\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:19.338633 kubelet[2815]: I0620 19:50:19.338335 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b7f3a0768780450411a7965d8d4587b-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal\" (UID: \"9b7f3a0768780450411a7965d8d4587b\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:19.361617 kubelet[2815]: I0620 19:50:19.361514 2815 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:19.376200 kubelet[2815]: I0620 19:50:19.376046 2815 kubelet_node_status.go:124] "Node was previously registered" 
node="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:19.376200 kubelet[2815]: I0620 19:50:19.376148 2815 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:20.093386 kubelet[2815]: I0620 19:50:20.093319 2815 apiserver.go:52] "Watching apiserver" Jun 20 19:50:20.137396 kubelet[2815]: I0620 19:50:20.137324 2815 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:50:20.197762 kubelet[2815]: I0620 19:50:20.197607 2815 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:20.199084 kubelet[2815]: I0620 19:50:20.199007 2815 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:20.219533 kubelet[2815]: I0620 19:50:20.219237 2815 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:50:20.220174 kubelet[2815]: E0620 19:50:20.219961 2815 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-0-0-4524070979.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:20.220174 kubelet[2815]: I0620 19:50:20.219422 2815 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:50:20.220174 kubelet[2815]: E0620 19:50:20.220128 2815 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-0-0-4524070979.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:50:20.248024 kubelet[2815]: I0620 19:50:20.247919 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4344-1-0-0-4524070979.novalocal" podStartSLOduration=4.247881284 podStartE2EDuration="4.247881284s" podCreationTimestamp="2025-06-20 19:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:50:20.247711232 +0000 UTC m=+1.262895981" watchObservedRunningTime="2025-06-20 19:50:20.247881284 +0000 UTC m=+1.263066033" Jun 20 19:50:20.263987 kubelet[2815]: I0620 19:50:20.263644 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal" podStartSLOduration=4.263628047 podStartE2EDuration="4.263628047s" podCreationTimestamp="2025-06-20 19:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:50:20.263293995 +0000 UTC m=+1.278478754" watchObservedRunningTime="2025-06-20 19:50:20.263628047 +0000 UTC m=+1.278812826" Jun 20 19:50:20.276029 kubelet[2815]: I0620 19:50:20.275959 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal" podStartSLOduration=4.275945686 podStartE2EDuration="4.275945686s" podCreationTimestamp="2025-06-20 19:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:50:20.27558774 +0000 UTC m=+1.290772499" watchObservedRunningTime="2025-06-20 19:50:20.275945686 +0000 UTC m=+1.291130445" Jun 20 19:50:23.205595 kubelet[2815]: I0620 19:50:23.205507 2815 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:50:23.207381 containerd[1551]: time="2025-06-20T19:50:23.206633590Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 20 19:50:23.209324 kubelet[2815]: I0620 19:50:23.208421 2815 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:50:24.175018 kubelet[2815]: I0620 19:50:24.174931 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/efc85c73-3481-4ab2-a525-5442fb3ddf9b-kube-proxy\") pod \"kube-proxy-cfkc6\" (UID: \"efc85c73-3481-4ab2-a525-5442fb3ddf9b\") " pod="kube-system/kube-proxy-cfkc6" Jun 20 19:50:24.175351 kubelet[2815]: I0620 19:50:24.175024 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efc85c73-3481-4ab2-a525-5442fb3ddf9b-xtables-lock\") pod \"kube-proxy-cfkc6\" (UID: \"efc85c73-3481-4ab2-a525-5442fb3ddf9b\") " pod="kube-system/kube-proxy-cfkc6" Jun 20 19:50:24.175351 kubelet[2815]: I0620 19:50:24.175078 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efc85c73-3481-4ab2-a525-5442fb3ddf9b-lib-modules\") pod \"kube-proxy-cfkc6\" (UID: \"efc85c73-3481-4ab2-a525-5442fb3ddf9b\") " pod="kube-system/kube-proxy-cfkc6" Jun 20 19:50:24.175351 kubelet[2815]: I0620 19:50:24.175129 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzjk\" (UniqueName: \"kubernetes.io/projected/efc85c73-3481-4ab2-a525-5442fb3ddf9b-kube-api-access-4pzjk\") pod \"kube-proxy-cfkc6\" (UID: \"efc85c73-3481-4ab2-a525-5442fb3ddf9b\") " pod="kube-system/kube-proxy-cfkc6" Jun 20 19:50:24.191736 systemd[1]: Created slice kubepods-besteffort-podefc85c73_3481_4ab2_a525_5442fb3ddf9b.slice - libcontainer container kubepods-besteffort-podefc85c73_3481_4ab2_a525_5442fb3ddf9b.slice. 
Jun 20 19:50:24.437033 systemd[1]: Created slice kubepods-besteffort-pod6dceac94_5d8a_4a17_880a_b168f5c68e50.slice - libcontainer container kubepods-besteffort-pod6dceac94_5d8a_4a17_880a_b168f5c68e50.slice. Jun 20 19:50:24.476897 kubelet[2815]: I0620 19:50:24.476831 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6dceac94-5d8a-4a17-880a-b168f5c68e50-var-lib-calico\") pod \"tigera-operator-68f7c7984d-hn5z4\" (UID: \"6dceac94-5d8a-4a17-880a-b168f5c68e50\") " pod="tigera-operator/tigera-operator-68f7c7984d-hn5z4" Jun 20 19:50:24.477460 kubelet[2815]: I0620 19:50:24.476975 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7vk\" (UniqueName: \"kubernetes.io/projected/6dceac94-5d8a-4a17-880a-b168f5c68e50-kube-api-access-sx7vk\") pod \"tigera-operator-68f7c7984d-hn5z4\" (UID: \"6dceac94-5d8a-4a17-880a-b168f5c68e50\") " pod="tigera-operator/tigera-operator-68f7c7984d-hn5z4" Jun 20 19:50:24.504691 containerd[1551]: time="2025-06-20T19:50:24.504601998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cfkc6,Uid:efc85c73-3481-4ab2-a525-5442fb3ddf9b,Namespace:kube-system,Attempt:0,}" Jun 20 19:50:24.544613 containerd[1551]: time="2025-06-20T19:50:24.544534100Z" level=info msg="connecting to shim a91b3a89ca0964ce5203167fcb02cce45fb309f5ccf57602c507a043bce9d472" address="unix:///run/containerd/s/30127bee41261fddf2ea4289b9e1a3dc789906bd77820b705319dd78018a0dc9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:50:24.624389 systemd[1]: Started cri-containerd-a91b3a89ca0964ce5203167fcb02cce45fb309f5ccf57602c507a043bce9d472.scope - libcontainer container a91b3a89ca0964ce5203167fcb02cce45fb309f5ccf57602c507a043bce9d472. 
Jun 20 19:50:24.657807 containerd[1551]: time="2025-06-20T19:50:24.657701833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cfkc6,Uid:efc85c73-3481-4ab2-a525-5442fb3ddf9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a91b3a89ca0964ce5203167fcb02cce45fb309f5ccf57602c507a043bce9d472\"" Jun 20 19:50:24.667112 containerd[1551]: time="2025-06-20T19:50:24.667044063Z" level=info msg="CreateContainer within sandbox \"a91b3a89ca0964ce5203167fcb02cce45fb309f5ccf57602c507a043bce9d472\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:50:24.683098 containerd[1551]: time="2025-06-20T19:50:24.683050684Z" level=info msg="Container 98b9b0e92f91ee8290d0d28d026566fc7353eb53bc432bdcfd53c62e83e4d691: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:50:24.699740 containerd[1551]: time="2025-06-20T19:50:24.699621090Z" level=info msg="CreateContainer within sandbox \"a91b3a89ca0964ce5203167fcb02cce45fb309f5ccf57602c507a043bce9d472\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"98b9b0e92f91ee8290d0d28d026566fc7353eb53bc432bdcfd53c62e83e4d691\"" Jun 20 19:50:24.702328 containerd[1551]: time="2025-06-20T19:50:24.702287738Z" level=info msg="StartContainer for \"98b9b0e92f91ee8290d0d28d026566fc7353eb53bc432bdcfd53c62e83e4d691\"" Jun 20 19:50:24.704330 containerd[1551]: time="2025-06-20T19:50:24.704291173Z" level=info msg="connecting to shim 98b9b0e92f91ee8290d0d28d026566fc7353eb53bc432bdcfd53c62e83e4d691" address="unix:///run/containerd/s/30127bee41261fddf2ea4289b9e1a3dc789906bd77820b705319dd78018a0dc9" protocol=ttrpc version=3 Jun 20 19:50:24.724371 systemd[1]: Started cri-containerd-98b9b0e92f91ee8290d0d28d026566fc7353eb53bc432bdcfd53c62e83e4d691.scope - libcontainer container 98b9b0e92f91ee8290d0d28d026566fc7353eb53bc432bdcfd53c62e83e4d691. 
Jun 20 19:50:24.742603 containerd[1551]: time="2025-06-20T19:50:24.742554593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-hn5z4,Uid:6dceac94-5d8a-4a17-880a-b168f5c68e50,Namespace:tigera-operator,Attempt:0,}" Jun 20 19:50:24.778516 containerd[1551]: time="2025-06-20T19:50:24.777373424Z" level=info msg="connecting to shim aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc" address="unix:///run/containerd/s/8d170f4a54531fc0fcffe4942d5a7bdbff5c1b2584a97fa97cd7568a511883ba" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:50:24.786592 containerd[1551]: time="2025-06-20T19:50:24.786535424Z" level=info msg="StartContainer for \"98b9b0e92f91ee8290d0d28d026566fc7353eb53bc432bdcfd53c62e83e4d691\" returns successfully" Jun 20 19:50:24.818412 systemd[1]: Started cri-containerd-aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc.scope - libcontainer container aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc. Jun 20 19:50:24.937916 containerd[1551]: time="2025-06-20T19:50:24.937825361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-hn5z4,Uid:6dceac94-5d8a-4a17-880a-b168f5c68e50,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc\"" Jun 20 19:50:24.942206 containerd[1551]: time="2025-06-20T19:50:24.942118000Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 19:50:25.256084 kubelet[2815]: I0620 19:50:25.255953 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cfkc6" podStartSLOduration=1.255591775 podStartE2EDuration="1.255591775s" podCreationTimestamp="2025-06-20 19:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:50:25.254151185 +0000 UTC m=+6.269335934" watchObservedRunningTime="2025-06-20 
19:50:25.255591775 +0000 UTC m=+6.270776555" Jun 20 19:50:26.746827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870156967.mount: Deactivated successfully. Jun 20 19:50:27.893680 containerd[1551]: time="2025-06-20T19:50:27.893605680Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:27.894446 containerd[1551]: time="2025-06-20T19:50:27.894387276Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 20 19:50:27.898194 containerd[1551]: time="2025-06-20T19:50:27.898127789Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:27.907009 containerd[1551]: time="2025-06-20T19:50:27.906922587Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:27.909495 containerd[1551]: time="2025-06-20T19:50:27.909338048Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 2.967104621s" Jun 20 19:50:27.909603 containerd[1551]: time="2025-06-20T19:50:27.909550881Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 20 19:50:27.921193 containerd[1551]: time="2025-06-20T19:50:27.921113355Z" level=info msg="CreateContainer within sandbox \"aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 19:50:27.935433 containerd[1551]: time="2025-06-20T19:50:27.934718206Z" level=info msg="Container 70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:50:27.937637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305275782.mount: Deactivated successfully. Jun 20 19:50:27.948366 containerd[1551]: time="2025-06-20T19:50:27.948321804Z" level=info msg="CreateContainer within sandbox \"aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e\"" Jun 20 19:50:27.949886 containerd[1551]: time="2025-06-20T19:50:27.949833399Z" level=info msg="StartContainer for \"70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e\"" Jun 20 19:50:27.951042 containerd[1551]: time="2025-06-20T19:50:27.951014700Z" level=info msg="connecting to shim 70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e" address="unix:///run/containerd/s/8d170f4a54531fc0fcffe4942d5a7bdbff5c1b2584a97fa97cd7568a511883ba" protocol=ttrpc version=3 Jun 20 19:50:27.981421 systemd[1]: Started cri-containerd-70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e.scope - libcontainer container 70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e. 
Jun 20 19:50:28.022099 containerd[1551]: time="2025-06-20T19:50:28.022047222Z" level=info msg="StartContainer for \"70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e\" returns successfully" Jun 20 19:50:31.484499 kubelet[2815]: I0620 19:50:31.483853 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-hn5z4" podStartSLOduration=4.513265385 podStartE2EDuration="7.483757564s" podCreationTimestamp="2025-06-20 19:50:24 +0000 UTC" firstStartedPulling="2025-06-20 19:50:24.940972616 +0000 UTC m=+5.956157365" lastFinishedPulling="2025-06-20 19:50:27.911464795 +0000 UTC m=+8.926649544" observedRunningTime="2025-06-20 19:50:28.269550991 +0000 UTC m=+9.284735790" watchObservedRunningTime="2025-06-20 19:50:31.483757564 +0000 UTC m=+12.498942333" Jun 20 19:50:35.107089 sudo[1836]: pam_unix(sudo:session): session closed for user root Jun 20 19:50:35.368820 sshd[1835]: Connection closed by 172.24.4.1 port 37936 Jun 20 19:50:35.371921 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Jun 20 19:50:35.384621 systemd[1]: sshd@8-172.24.4.123:22-172.24.4.1:37936.service: Deactivated successfully. Jun 20 19:50:35.394749 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:50:35.396545 systemd[1]: session-11.scope: Consumed 8.519s CPU time, 236.3M memory peak. Jun 20 19:50:35.404447 systemd-logind[1537]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:50:35.409393 systemd-logind[1537]: Removed session 11. Jun 20 19:50:40.240680 systemd[1]: Created slice kubepods-besteffort-pod03e217a4_96c8_4361_ad15_0c23b53a1707.slice - libcontainer container kubepods-besteffort-pod03e217a4_96c8_4361_ad15_0c23b53a1707.slice. 
Jun 20 19:50:40.303639 kubelet[2815]: I0620 19:50:40.303530 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03e217a4-96c8-4361-ad15-0c23b53a1707-tigera-ca-bundle\") pod \"calico-typha-fb7bcf74d-h9mnv\" (UID: \"03e217a4-96c8-4361-ad15-0c23b53a1707\") " pod="calico-system/calico-typha-fb7bcf74d-h9mnv" Jun 20 19:50:40.303639 kubelet[2815]: I0620 19:50:40.303636 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/03e217a4-96c8-4361-ad15-0c23b53a1707-typha-certs\") pod \"calico-typha-fb7bcf74d-h9mnv\" (UID: \"03e217a4-96c8-4361-ad15-0c23b53a1707\") " pod="calico-system/calico-typha-fb7bcf74d-h9mnv" Jun 20 19:50:40.303639 kubelet[2815]: I0620 19:50:40.303666 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w229t\" (UniqueName: \"kubernetes.io/projected/03e217a4-96c8-4361-ad15-0c23b53a1707-kube-api-access-w229t\") pod \"calico-typha-fb7bcf74d-h9mnv\" (UID: \"03e217a4-96c8-4361-ad15-0c23b53a1707\") " pod="calico-system/calico-typha-fb7bcf74d-h9mnv" Jun 20 19:50:40.550580 systemd[1]: Created slice kubepods-besteffort-pod36329d5c_866f_4f2f_9cb3_180935fb21f9.slice - libcontainer container kubepods-besteffort-pod36329d5c_866f_4f2f_9cb3_180935fb21f9.slice. 
Jun 20 19:50:40.553866 containerd[1551]: time="2025-06-20T19:50:40.553386993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb7bcf74d-h9mnv,Uid:03e217a4-96c8-4361-ad15-0c23b53a1707,Namespace:calico-system,Attempt:0,}" Jun 20 19:50:40.606908 kubelet[2815]: I0620 19:50:40.606820 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/36329d5c-866f-4f2f-9cb3-180935fb21f9-node-certs\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.607531 kubelet[2815]: I0620 19:50:40.607152 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/36329d5c-866f-4f2f-9cb3-180935fb21f9-var-run-calico\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.607531 kubelet[2815]: I0620 19:50:40.607339 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/36329d5c-866f-4f2f-9cb3-180935fb21f9-cni-net-dir\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.608476 kubelet[2815]: I0620 19:50:40.608271 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36329d5c-866f-4f2f-9cb3-180935fb21f9-lib-modules\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.608476 kubelet[2815]: I0620 19:50:40.608329 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/36329d5c-866f-4f2f-9cb3-180935fb21f9-var-lib-calico\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.608476 kubelet[2815]: I0620 19:50:40.608431 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/36329d5c-866f-4f2f-9cb3-180935fb21f9-flexvol-driver-host\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.608917 kubelet[2815]: I0620 19:50:40.608674 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/36329d5c-866f-4f2f-9cb3-180935fb21f9-cni-log-dir\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.608917 kubelet[2815]: I0620 19:50:40.608739 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36329d5c-866f-4f2f-9cb3-180935fb21f9-xtables-lock\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.608917 kubelet[2815]: I0620 19:50:40.608775 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/36329d5c-866f-4f2f-9cb3-180935fb21f9-cni-bin-dir\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.609580 kubelet[2815]: I0620 19:50:40.608792 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/36329d5c-866f-4f2f-9cb3-180935fb21f9-policysync\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.609580 kubelet[2815]: I0620 19:50:40.609468 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36329d5c-866f-4f2f-9cb3-180935fb21f9-tigera-ca-bundle\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.609580 kubelet[2815]: I0620 19:50:40.609517 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrm8h\" (UniqueName: \"kubernetes.io/projected/36329d5c-866f-4f2f-9cb3-180935fb21f9-kube-api-access-wrm8h\") pod \"calico-node-tz88k\" (UID: \"36329d5c-866f-4f2f-9cb3-180935fb21f9\") " pod="calico-system/calico-node-tz88k" Jun 20 19:50:40.610900 containerd[1551]: time="2025-06-20T19:50:40.610835614Z" level=info msg="connecting to shim f5ce4229c6ffe477dcca00949a5582fe60523eb85c85d3e293ac5c7549cdb567" address="unix:///run/containerd/s/f103a79b2ddc49967ffa9c042d8a4cbb059e97b582c0b0328fc55c0bf4535ff0" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:50:40.675587 systemd[1]: Started cri-containerd-f5ce4229c6ffe477dcca00949a5582fe60523eb85c85d3e293ac5c7549cdb567.scope - libcontainer container f5ce4229c6ffe477dcca00949a5582fe60523eb85c85d3e293ac5c7549cdb567. 
Jun 20 19:50:40.721993 kubelet[2815]: E0620 19:50:40.721868 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:40.721993 kubelet[2815]: W0620 19:50:40.721927 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:40.722582 kubelet[2815]: E0620 19:50:40.722232 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:40.731317 kubelet[2815]: E0620 19:50:40.731283 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:40.731532 kubelet[2815]: W0620 19:50:40.731453 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:40.731532 kubelet[2815]: E0620 19:50:40.731479 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:40.803414 containerd[1551]: time="2025-06-20T19:50:40.803226213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb7bcf74d-h9mnv,Uid:03e217a4-96c8-4361-ad15-0c23b53a1707,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5ce4229c6ffe477dcca00949a5582fe60523eb85c85d3e293ac5c7549cdb567\"" Jun 20 19:50:40.808893 containerd[1551]: time="2025-06-20T19:50:40.808832026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 19:50:40.834720 kubelet[2815]: E0620 19:50:40.834630 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:50:40.857877 containerd[1551]: time="2025-06-20T19:50:40.857822986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tz88k,Uid:36329d5c-866f-4f2f-9cb3-180935fb21f9,Namespace:calico-system,Attempt:0,}" Jun 20 19:50:40.880512 kubelet[2815]: E0620 19:50:40.880155 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:40.880512 kubelet[2815]: W0620 19:50:40.880452 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:40.880512 kubelet[2815]: E0620 19:50:40.880477 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.881982 kubelet[2815]: E0620 19:50:40.881968 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.882140 kubelet[2815]: W0620 19:50:40.882086 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.885986 kubelet[2815]: E0620 19:50:40.882238 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.885986 kubelet[2815]: E0620 19:50:40.882725 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.885986 kubelet[2815]: W0620 19:50:40.882737 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.885986 kubelet[2815]: E0620 19:50:40.882761 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.885986 kubelet[2815]: E0620 19:50:40.884268 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.885986 kubelet[2815]: W0620 19:50:40.884284 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.885986 kubelet[2815]: E0620 19:50:40.884322 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.885986 kubelet[2815]: E0620 19:50:40.884615 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.885986 kubelet[2815]: W0620 19:50:40.884626 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.885986 kubelet[2815]: E0620 19:50:40.884637 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.887394 kubelet[2815]: E0620 19:50:40.884838 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.887394 kubelet[2815]: W0620 19:50:40.884848 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.887394 kubelet[2815]: E0620 19:50:40.884858 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.887394 kubelet[2815]: E0620 19:50:40.885049 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.887394 kubelet[2815]: W0620 19:50:40.885060 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.887394 kubelet[2815]: E0620 19:50:40.885069 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.887394 kubelet[2815]: E0620 19:50:40.885332 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.887394 kubelet[2815]: W0620 19:50:40.885386 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.887394 kubelet[2815]: E0620 19:50:40.885398 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.887394 kubelet[2815]: E0620 19:50:40.885610 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.889320 kubelet[2815]: W0620 19:50:40.885638 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.889320 kubelet[2815]: E0620 19:50:40.885649 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.889320 kubelet[2815]: E0620 19:50:40.885881 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.889320 kubelet[2815]: W0620 19:50:40.885892 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.889320 kubelet[2815]: E0620 19:50:40.885905 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.889320 kubelet[2815]: E0620 19:50:40.887069 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.889320 kubelet[2815]: W0620 19:50:40.887080 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.889320 kubelet[2815]: E0620 19:50:40.887092 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.889320 kubelet[2815]: E0620 19:50:40.888204 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.889320 kubelet[2815]: W0620 19:50:40.888214 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.890194 kubelet[2815]: E0620 19:50:40.888226 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.890194 kubelet[2815]: E0620 19:50:40.889681 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.890194 kubelet[2815]: W0620 19:50:40.889693 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.890194 kubelet[2815]: E0620 19:50:40.889704 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.890194 kubelet[2815]: E0620 19:50:40.889891 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.890194 kubelet[2815]: W0620 19:50:40.889904 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.890194 kubelet[2815]: E0620 19:50:40.889914 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.890194 kubelet[2815]: E0620 19:50:40.890153 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.891805 kubelet[2815]: W0620 19:50:40.890162 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.891805 kubelet[2815]: E0620 19:50:40.890311 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.891805 kubelet[2815]: E0620 19:50:40.890557 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.891805 kubelet[2815]: W0620 19:50:40.890567 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.891805 kubelet[2815]: E0620 19:50:40.890577 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.893451 kubelet[2815]: E0620 19:50:40.893333 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.893451 kubelet[2815]: W0620 19:50:40.893357 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.893451 kubelet[2815]: E0620 19:50:40.893400 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.893756 kubelet[2815]: E0620 19:50:40.893729 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.893756 kubelet[2815]: W0620 19:50:40.893747 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.894011 kubelet[2815]: E0620 19:50:40.893759 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.894624 kubelet[2815]: E0620 19:50:40.894591 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.894624 kubelet[2815]: W0620 19:50:40.894608 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.894624 kubelet[2815]: E0620 19:50:40.894618 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.896264 kubelet[2815]: E0620 19:50:40.896238 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.896264 kubelet[2815]: W0620 19:50:40.896255 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.896393 kubelet[2815]: E0620 19:50:40.896268 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.914177 kubelet[2815]: E0620 19:50:40.914126 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.914177 kubelet[2815]: W0620 19:50:40.914151 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.914177 kubelet[2815]: E0620 19:50:40.914189 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.914722 kubelet[2815]: I0620 19:50:40.914233 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c9d6a569-963e-4451-b36f-587404b621dd-varrun\") pod \"csi-node-driver-5ldnw\" (UID: \"c9d6a569-963e-4451-b36f-587404b621dd\") " pod="calico-system/csi-node-driver-5ldnw"
Jun 20 19:50:40.915398 kubelet[2815]: E0620 19:50:40.915362 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.915398 kubelet[2815]: W0620 19:50:40.915380 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.915398 kubelet[2815]: E0620 19:50:40.915394 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.915800 kubelet[2815]: I0620 19:50:40.915435 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9d6a569-963e-4451-b36f-587404b621dd-socket-dir\") pod \"csi-node-driver-5ldnw\" (UID: \"c9d6a569-963e-4451-b36f-587404b621dd\") " pod="calico-system/csi-node-driver-5ldnw"
Jun 20 19:50:40.916338 kubelet[2815]: E0620 19:50:40.916283 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.916338 kubelet[2815]: W0620 19:50:40.916300 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.916338 kubelet[2815]: E0620 19:50:40.916312 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.916682 kubelet[2815]: I0620 19:50:40.916475 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7flmj\" (UniqueName: \"kubernetes.io/projected/c9d6a569-963e-4451-b36f-587404b621dd-kube-api-access-7flmj\") pod \"csi-node-driver-5ldnw\" (UID: \"c9d6a569-963e-4451-b36f-587404b621dd\") " pod="calico-system/csi-node-driver-5ldnw"
Jun 20 19:50:40.918335 kubelet[2815]: E0620 19:50:40.918313 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.918335 kubelet[2815]: W0620 19:50:40.918333 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.918689 kubelet[2815]: E0620 19:50:40.918345 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.918689 kubelet[2815]: E0620 19:50:40.918474 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.918689 kubelet[2815]: W0620 19:50:40.918483 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.918689 kubelet[2815]: E0620 19:50:40.918492 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.918689 kubelet[2815]: E0620 19:50:40.918660 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.918689 kubelet[2815]: W0620 19:50:40.918670 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.918689 kubelet[2815]: E0620 19:50:40.918679 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.919445 kubelet[2815]: E0620 19:50:40.919235 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.919445 kubelet[2815]: W0620 19:50:40.919251 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.919445 kubelet[2815]: E0620 19:50:40.919261 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.919445 kubelet[2815]: E0620 19:50:40.919411 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.919445 kubelet[2815]: W0620 19:50:40.919421 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.919445 kubelet[2815]: I0620 19:50:40.919306 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9d6a569-963e-4451-b36f-587404b621dd-registration-dir\") pod \"csi-node-driver-5ldnw\" (UID: \"c9d6a569-963e-4451-b36f-587404b621dd\") " pod="calico-system/csi-node-driver-5ldnw"
Jun 20 19:50:40.919445 kubelet[2815]: E0620 19:50:40.919445 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.919953 kubelet[2815]: E0620 19:50:40.919826 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.919953 kubelet[2815]: W0620 19:50:40.919843 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.919953 kubelet[2815]: E0620 19:50:40.919860 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.920312 containerd[1551]: time="2025-06-20T19:50:40.919945372Z" level=info msg="connecting to shim 37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf" address="unix:///run/containerd/s/5b51048ad20e2f105ada92df96cd96c3839dc7e22ce3cf91c32d66b1ddf97b8b" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:50:40.920440 kubelet[2815]: E0620 19:50:40.920419 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.920440 kubelet[2815]: W0620 19:50:40.920434 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.920867 kubelet[2815]: E0620 19:50:40.920445 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.920905 kubelet[2815]: E0620 19:50:40.920884 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.920905 kubelet[2815]: W0620 19:50:40.920895 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.920960 kubelet[2815]: E0620 19:50:40.920904 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.923391 kubelet[2815]: E0620 19:50:40.923364 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.923391 kubelet[2815]: W0620 19:50:40.923384 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.923519 kubelet[2815]: E0620 19:50:40.923402 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.923519 kubelet[2815]: I0620 19:50:40.923437 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9d6a569-963e-4451-b36f-587404b621dd-kubelet-dir\") pod \"csi-node-driver-5ldnw\" (UID: \"c9d6a569-963e-4451-b36f-587404b621dd\") " pod="calico-system/csi-node-driver-5ldnw"
Jun 20 19:50:40.923989 kubelet[2815]: E0620 19:50:40.923948 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.923989 kubelet[2815]: W0620 19:50:40.923961 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.923989 kubelet[2815]: E0620 19:50:40.923973 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:40.925317 kubelet[2815]: E0620 19:50:40.925296 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.925317 kubelet[2815]: W0620 19:50:40.925310 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.925503 kubelet[2815]: E0620 19:50:40.925321 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.926396 kubelet[2815]: E0620 19:50:40.926370 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:40.926396 kubelet[2815]: W0620 19:50:40.926385 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:40.926396 kubelet[2815]: E0620 19:50:40.926397 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:40.968408 systemd[1]: Started cri-containerd-37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf.scope - libcontainer container 37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf.
Jun 20 19:50:41.026160 kubelet[2815]: E0620 19:50:41.026041 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.026160 kubelet[2815]: W0620 19:50:41.026067 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.026160 kubelet[2815]: E0620 19:50:41.026088 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:41.027459 kubelet[2815]: E0620 19:50:41.027431 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.027459 kubelet[2815]: W0620 19:50:41.027450 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.027592 kubelet[2815]: E0620 19:50:41.027464 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:41.027902 kubelet[2815]: E0620 19:50:41.027882 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.027902 kubelet[2815]: W0620 19:50:41.027897 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.028316 kubelet[2815]: E0620 19:50:41.027907 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:41.030305 kubelet[2815]: E0620 19:50:41.028992 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.030305 kubelet[2815]: W0620 19:50:41.029016 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.030305 kubelet[2815]: E0620 19:50:41.029060 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:41.030861 kubelet[2815]: E0620 19:50:41.030758 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.030861 kubelet[2815]: W0620 19:50:41.030775 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.030861 kubelet[2815]: E0620 19:50:41.030792 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:41.031312 kubelet[2815]: E0620 19:50:41.031298 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.031531 kubelet[2815]: W0620 19:50:41.031437 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.031531 kubelet[2815]: E0620 19:50:41.031473 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:41.032079 kubelet[2815]: E0620 19:50:41.031878 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.032079 kubelet[2815]: W0620 19:50:41.032007 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.032079 kubelet[2815]: E0620 19:50:41.032019 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:41.032692 kubelet[2815]: E0620 19:50:41.032671 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.033110 kubelet[2815]: W0620 19:50:41.032865 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.033110 kubelet[2815]: E0620 19:50:41.032881 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:41.034410 kubelet[2815]: E0620 19:50:41.034396 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.034672 kubelet[2815]: W0620 19:50:41.034504 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.034672 kubelet[2815]: E0620 19:50:41.034523 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:50:41.035580 kubelet[2815]: E0620 19:50:41.035386 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.035580 kubelet[2815]: W0620 19:50:41.035400 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.035580 kubelet[2815]: E0620 19:50:41.035411 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:50:41.039772 kubelet[2815]: E0620 19:50:41.039493 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:50:41.039772 kubelet[2815]: W0620 19:50:41.039514 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:50:41.039772 kubelet[2815]: E0620 19:50:41.039540 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:41.040543 kubelet[2815]: E0620 19:50:41.040410 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.041214 kubelet[2815]: W0620 19:50:41.040793 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.041214 kubelet[2815]: E0620 19:50:41.040835 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:41.043029 kubelet[2815]: E0620 19:50:41.042968 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.043029 kubelet[2815]: W0620 19:50:41.042988 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.043382 kubelet[2815]: E0620 19:50:41.043008 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:41.044761 kubelet[2815]: E0620 19:50:41.044734 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.045136 kubelet[2815]: W0620 19:50:41.045065 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.045136 kubelet[2815]: E0620 19:50:41.045090 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:41.046033 kubelet[2815]: E0620 19:50:41.045864 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.046033 kubelet[2815]: W0620 19:50:41.045888 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.046033 kubelet[2815]: E0620 19:50:41.045906 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:41.050206 kubelet[2815]: E0620 19:50:41.049247 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.050206 kubelet[2815]: W0620 19:50:41.049276 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.050206 kubelet[2815]: E0620 19:50:41.049307 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:41.050787 kubelet[2815]: E0620 19:50:41.050773 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.050881 kubelet[2815]: W0620 19:50:41.050862 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.052668 kubelet[2815]: E0620 19:50:41.052650 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:41.052972 kubelet[2815]: E0620 19:50:41.052959 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.053060 kubelet[2815]: W0620 19:50:41.053047 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.053155 kubelet[2815]: E0620 19:50:41.053143 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:41.053583 kubelet[2815]: E0620 19:50:41.053497 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.054601 kubelet[2815]: W0620 19:50:41.054585 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.054746 kubelet[2815]: E0620 19:50:41.054730 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:41.058679 kubelet[2815]: E0620 19:50:41.057602 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.058894 kubelet[2815]: W0620 19:50:41.058875 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.059068 kubelet[2815]: E0620 19:50:41.058969 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:41.060114 kubelet[2815]: E0620 19:50:41.060099 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.062558 kubelet[2815]: W0620 19:50:41.062035 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.062558 kubelet[2815]: E0620 19:50:41.062540 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:41.065612 kubelet[2815]: E0620 19:50:41.065114 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.065612 kubelet[2815]: W0620 19:50:41.065158 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.065612 kubelet[2815]: E0620 19:50:41.065205 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:41.067117 kubelet[2815]: E0620 19:50:41.066405 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.067117 kubelet[2815]: W0620 19:50:41.066425 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.067117 kubelet[2815]: E0620 19:50:41.066442 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:41.067772 kubelet[2815]: E0620 19:50:41.067316 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.067772 kubelet[2815]: W0620 19:50:41.067334 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.067772 kubelet[2815]: E0620 19:50:41.067388 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:41.068077 kubelet[2815]: E0620 19:50:41.068027 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.068077 kubelet[2815]: W0620 19:50:41.068040 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.068077 kubelet[2815]: E0620 19:50:41.068059 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:41.072933 containerd[1551]: time="2025-06-20T19:50:41.072857254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tz88k,Uid:36329d5c-866f-4f2f-9cb3-180935fb21f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf\"" Jun 20 19:50:41.103414 kubelet[2815]: E0620 19:50:41.103379 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:41.103414 kubelet[2815]: W0620 19:50:41.103404 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:41.103664 kubelet[2815]: E0620 19:50:41.103449 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:42.148418 kubelet[2815]: E0620 19:50:42.148233 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:50:42.936403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount961998767.mount: Deactivated successfully. 
Jun 20 19:50:44.148722 kubelet[2815]: E0620 19:50:44.148292 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:50:44.534571 containerd[1551]: time="2025-06-20T19:50:44.534441436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:44.535578 containerd[1551]: time="2025-06-20T19:50:44.535533966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 20 19:50:44.536729 containerd[1551]: time="2025-06-20T19:50:44.536664538Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:44.539230 containerd[1551]: time="2025-06-20T19:50:44.539157148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:44.540068 containerd[1551]: time="2025-06-20T19:50:44.539883227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 3.730985647s" Jun 20 19:50:44.540068 containerd[1551]: time="2025-06-20T19:50:44.539934764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference 
\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 20 19:50:44.542057 containerd[1551]: time="2025-06-20T19:50:44.541430845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 19:50:44.566035 containerd[1551]: time="2025-06-20T19:50:44.564967949Z" level=info msg="CreateContainer within sandbox \"f5ce4229c6ffe477dcca00949a5582fe60523eb85c85d3e293ac5c7549cdb567\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 19:50:44.588234 containerd[1551]: time="2025-06-20T19:50:44.586914973Z" level=info msg="Container c91f1b9006c861b4d261b7a3b6928624739e929f655975440b8a5be5b49fa353: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:50:44.605042 containerd[1551]: time="2025-06-20T19:50:44.604906969Z" level=info msg="CreateContainer within sandbox \"f5ce4229c6ffe477dcca00949a5582fe60523eb85c85d3e293ac5c7549cdb567\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c91f1b9006c861b4d261b7a3b6928624739e929f655975440b8a5be5b49fa353\"" Jun 20 19:50:44.606023 containerd[1551]: time="2025-06-20T19:50:44.606005040Z" level=info msg="StartContainer for \"c91f1b9006c861b4d261b7a3b6928624739e929f655975440b8a5be5b49fa353\"" Jun 20 19:50:44.607828 containerd[1551]: time="2025-06-20T19:50:44.607732807Z" level=info msg="connecting to shim c91f1b9006c861b4d261b7a3b6928624739e929f655975440b8a5be5b49fa353" address="unix:///run/containerd/s/f103a79b2ddc49967ffa9c042d8a4cbb059e97b582c0b0328fc55c0bf4535ff0" protocol=ttrpc version=3 Jun 20 19:50:44.649596 systemd[1]: Started cri-containerd-c91f1b9006c861b4d261b7a3b6928624739e929f655975440b8a5be5b49fa353.scope - libcontainer container c91f1b9006c861b4d261b7a3b6928624739e929f655975440b8a5be5b49fa353. 
Jun 20 19:50:44.718381 containerd[1551]: time="2025-06-20T19:50:44.718327721Z" level=info msg="StartContainer for \"c91f1b9006c861b4d261b7a3b6928624739e929f655975440b8a5be5b49fa353\" returns successfully" Jun 20 19:50:45.358126 kubelet[2815]: I0620 19:50:45.357991 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fb7bcf74d-h9mnv" podStartSLOduration=1.6250417019999999 podStartE2EDuration="5.357940908s" podCreationTimestamp="2025-06-20 19:50:40 +0000 UTC" firstStartedPulling="2025-06-20 19:50:40.808052646 +0000 UTC m=+21.823237405" lastFinishedPulling="2025-06-20 19:50:44.540951862 +0000 UTC m=+25.556136611" observedRunningTime="2025-06-20 19:50:45.357021674 +0000 UTC m=+26.372206443" watchObservedRunningTime="2025-06-20 19:50:45.357940908 +0000 UTC m=+26.373125667" Jun 20 19:50:45.429640 kubelet[2815]: E0620 19:50:45.429591 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.429894 kubelet[2815]: W0620 19:50:45.429819 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.429894 kubelet[2815]: E0620 19:50:45.429851 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.430406 kubelet[2815]: E0620 19:50:45.430333 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.430406 kubelet[2815]: W0620 19:50:45.430346 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.430406 kubelet[2815]: E0620 19:50:45.430356 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.430820 kubelet[2815]: E0620 19:50:45.430775 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.430820 kubelet[2815]: W0620 19:50:45.430787 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.430820 kubelet[2815]: E0620 19:50:45.430797 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.431543 kubelet[2815]: E0620 19:50:45.431469 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.431543 kubelet[2815]: W0620 19:50:45.431482 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.431543 kubelet[2815]: E0620 19:50:45.431493 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.432041 kubelet[2815]: E0620 19:50:45.431968 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.432041 kubelet[2815]: W0620 19:50:45.431982 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.432041 kubelet[2815]: E0620 19:50:45.431993 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.432574 kubelet[2815]: E0620 19:50:45.432501 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.432574 kubelet[2815]: W0620 19:50:45.432514 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.432574 kubelet[2815]: E0620 19:50:45.432525 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.433047 kubelet[2815]: E0620 19:50:45.432975 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.433047 kubelet[2815]: W0620 19:50:45.432999 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.433047 kubelet[2815]: E0620 19:50:45.433011 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.433577 kubelet[2815]: E0620 19:50:45.433503 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.433577 kubelet[2815]: W0620 19:50:45.433515 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.433577 kubelet[2815]: E0620 19:50:45.433526 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.434103 kubelet[2815]: E0620 19:50:45.434010 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.434103 kubelet[2815]: W0620 19:50:45.434022 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.434103 kubelet[2815]: E0620 19:50:45.434033 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.434578 kubelet[2815]: E0620 19:50:45.434518 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.434578 kubelet[2815]: W0620 19:50:45.434530 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.434578 kubelet[2815]: E0620 19:50:45.434541 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.435010 kubelet[2815]: E0620 19:50:45.434945 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.435010 kubelet[2815]: W0620 19:50:45.434958 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.435010 kubelet[2815]: E0620 19:50:45.434968 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.435514 kubelet[2815]: E0620 19:50:45.435428 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.435514 kubelet[2815]: W0620 19:50:45.435440 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.435514 kubelet[2815]: E0620 19:50:45.435451 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.435927 kubelet[2815]: E0620 19:50:45.435913 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.436068 kubelet[2815]: W0620 19:50:45.436005 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.436068 kubelet[2815]: E0620 19:50:45.436023 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.436489 kubelet[2815]: E0620 19:50:45.436422 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.436489 kubelet[2815]: W0620 19:50:45.436436 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.436489 kubelet[2815]: E0620 19:50:45.436446 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.436910 kubelet[2815]: E0620 19:50:45.436825 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.436910 kubelet[2815]: W0620 19:50:45.436838 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.436910 kubelet[2815]: E0620 19:50:45.436849 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.473424 kubelet[2815]: E0620 19:50:45.473373 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.473424 kubelet[2815]: W0620 19:50:45.473418 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.473716 kubelet[2815]: E0620 19:50:45.473456 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.473972 kubelet[2815]: E0620 19:50:45.473939 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.474031 kubelet[2815]: W0620 19:50:45.473971 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.474031 kubelet[2815]: E0620 19:50:45.473997 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.474557 kubelet[2815]: E0620 19:50:45.474424 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.474557 kubelet[2815]: W0620 19:50:45.474449 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.474557 kubelet[2815]: E0620 19:50:45.474469 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.474970 kubelet[2815]: E0620 19:50:45.474898 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.474970 kubelet[2815]: W0620 19:50:45.474912 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.474970 kubelet[2815]: E0620 19:50:45.474925 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.475369 kubelet[2815]: E0620 19:50:45.475251 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.475369 kubelet[2815]: W0620 19:50:45.475265 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.475369 kubelet[2815]: E0620 19:50:45.475275 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.476900 kubelet[2815]: E0620 19:50:45.475593 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.476900 kubelet[2815]: W0620 19:50:45.475606 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.476900 kubelet[2815]: E0620 19:50:45.475616 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:45.476900 kubelet[2815]: E0620 19:50:45.476534 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.476900 kubelet[2815]: W0620 19:50:45.476562 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.476900 kubelet[2815]: E0620 19:50:45.476588 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:45.477800 kubelet[2815]: E0620 19:50:45.477785 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:45.477883 kubelet[2815]: W0620 19:50:45.477871 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:45.477953 kubelet[2815]: E0620 19:50:45.477941 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:46.148812 kubelet[2815]: E0620 19:50:46.148599 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:50:46.337629 kubelet[2815]: I0620 19:50:46.337479 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:50:46.344766 kubelet[2815]: E0620 19:50:46.344374 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:46.344766 kubelet[2815]: W0620 19:50:46.344451 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:46.344766 kubelet[2815]: E0620 19:50:46.344490 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:46.345753 kubelet[2815]: E0620 19:50:46.345598 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:46.345753 kubelet[2815]: W0620 19:50:46.345631 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:46.345753 kubelet[2815]: E0620 19:50:46.345658 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:50:46.397725 kubelet[2815]: E0620 19:50:46.397583 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:50:46.397725 kubelet[2815]: W0620 19:50:46.397621 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:50:46.397725 kubelet[2815]: E0620 19:50:46.397647 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:50:46.854003 containerd[1551]: time="2025-06-20T19:50:46.853255221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:46.855612 containerd[1551]: time="2025-06-20T19:50:46.855586768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 20 19:50:46.857271 containerd[1551]: time="2025-06-20T19:50:46.857243171Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:46.860442 containerd[1551]: time="2025-06-20T19:50:46.860416053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:46.861090 containerd[1551]: time="2025-06-20T19:50:46.861041453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 2.319370596s" Jun 20 19:50:46.861139 containerd[1551]: time="2025-06-20T19:50:46.861091377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 20 19:50:46.868845 containerd[1551]: time="2025-06-20T19:50:46.868794292Z" level=info msg="CreateContainer within sandbox \"37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:50:46.890344 containerd[1551]: time="2025-06-20T19:50:46.888652046Z" level=info msg="Container 9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:50:46.893019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432775535.mount: Deactivated successfully. 
Jun 20 19:50:46.907690 containerd[1551]: time="2025-06-20T19:50:46.907565198Z" level=info msg="CreateContainer within sandbox \"37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc\"" Jun 20 19:50:46.908312 containerd[1551]: time="2025-06-20T19:50:46.908286067Z" level=info msg="StartContainer for \"9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc\"" Jun 20 19:50:46.910612 containerd[1551]: time="2025-06-20T19:50:46.910557630Z" level=info msg="connecting to shim 9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc" address="unix:///run/containerd/s/5b51048ad20e2f105ada92df96cd96c3839dc7e22ce3cf91c32d66b1ddf97b8b" protocol=ttrpc version=3 Jun 20 19:50:46.944345 systemd[1]: Started cri-containerd-9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc.scope - libcontainer container 9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc. Jun 20 19:50:47.007125 containerd[1551]: time="2025-06-20T19:50:47.006735208Z" level=info msg="StartContainer for \"9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc\" returns successfully" Jun 20 19:50:47.021919 systemd[1]: cri-containerd-9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc.scope: Deactivated successfully. 
Jun 20 19:50:47.026722 containerd[1551]: time="2025-06-20T19:50:47.026604044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc\" id:\"9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc\" pid:3520 exited_at:{seconds:1750449047 nanos:25440110}" Jun 20 19:50:47.026810 containerd[1551]: time="2025-06-20T19:50:47.026759838Z" level=info msg="received exit event container_id:\"9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc\" id:\"9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc\" pid:3520 exited_at:{seconds:1750449047 nanos:25440110}" Jun 20 19:50:47.063161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9341d59894e3c63b9ae1a237cbe6c81d03a84f4ae6f421c327a1d42f5884f2bc-rootfs.mount: Deactivated successfully. Jun 20 19:50:48.148432 kubelet[2815]: E0620 19:50:48.148243 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:50:48.364904 containerd[1551]: time="2025-06-20T19:50:48.364478639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:50:50.149255 kubelet[2815]: E0620 19:50:50.147637 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:50:52.149660 kubelet[2815]: E0620 19:50:52.149583 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:50:54.115535 containerd[1551]: time="2025-06-20T19:50:54.115245153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:54.117749 containerd[1551]: time="2025-06-20T19:50:54.117713747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:50:54.118970 containerd[1551]: time="2025-06-20T19:50:54.118906985Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:54.122898 containerd[1551]: time="2025-06-20T19:50:54.122812657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:50:54.123763 containerd[1551]: time="2025-06-20T19:50:54.123527084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 5.757425005s" Jun 20 19:50:54.123763 containerd[1551]: time="2025-06-20T19:50:54.123584412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:50:54.132689 containerd[1551]: time="2025-06-20T19:50:54.131881542Z" level=info msg="CreateContainer within sandbox \"37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:50:54.147487 kubelet[2815]: E0620 19:50:54.147415 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:50:54.149400 containerd[1551]: time="2025-06-20T19:50:54.149365340Z" level=info msg="Container 0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:50:54.173040 containerd[1551]: time="2025-06-20T19:50:54.172978781Z" level=info msg="CreateContainer within sandbox \"37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806\"" Jun 20 19:50:54.179147 containerd[1551]: time="2025-06-20T19:50:54.179096162Z" level=info msg="StartContainer for \"0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806\"" Jun 20 19:50:54.181828 containerd[1551]: time="2025-06-20T19:50:54.181795391Z" level=info msg="connecting to shim 0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806" address="unix:///run/containerd/s/5b51048ad20e2f105ada92df96cd96c3839dc7e22ce3cf91c32d66b1ddf97b8b" protocol=ttrpc version=3 Jun 20 19:50:54.220428 systemd[1]: Started cri-containerd-0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806.scope - libcontainer container 0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806. 
Jun 20 19:50:54.329250 containerd[1551]: time="2025-06-20T19:50:54.329162678Z" level=info msg="StartContainer for \"0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806\" returns successfully" Jun 20 19:50:56.133673 systemd[1]: cri-containerd-0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806.scope: Deactivated successfully. Jun 20 19:50:56.135467 systemd[1]: cri-containerd-0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806.scope: Consumed 1.113s CPU time, 190.7M memory peak, 171.2M written to disk. Jun 20 19:50:56.141089 containerd[1551]: time="2025-06-20T19:50:56.140795706Z" level=info msg="received exit event container_id:\"0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806\" id:\"0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806\" pid:3580 exited_at:{seconds:1750449056 nanos:140222085}" Jun 20 19:50:56.142328 containerd[1551]: time="2025-06-20T19:50:56.142266258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806\" id:\"0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806\" pid:3580 exited_at:{seconds:1750449056 nanos:140222085}" Jun 20 19:50:56.148451 kubelet[2815]: E0620 19:50:56.147687 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:50:56.183767 kubelet[2815]: I0620 19:50:56.183437 2815 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:50:56.208370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e73deda8e0a97afa9410aa6e6e80221a27e553eb5a2749e444d865c49c20806-rootfs.mount: Deactivated successfully. 
Jun 20 19:50:57.108375 systemd[1]: Created slice kubepods-burstable-pod0d4d76ec_5329_4cb9_bf25_ffb312bbf65b.slice - libcontainer container kubepods-burstable-pod0d4d76ec_5329_4cb9_bf25_ffb312bbf65b.slice. Jun 20 19:50:57.142631 systemd[1]: Created slice kubepods-besteffort-poda4c109fd_2fe5_4963_8e66_cb4e40a83c1d.slice - libcontainer container kubepods-besteffort-poda4c109fd_2fe5_4963_8e66_cb4e40a83c1d.slice. Jun 20 19:50:57.153244 systemd[1]: Created slice kubepods-besteffort-pod31d1e1de_5d04_4fb1_a1dd_f2993de9970d.slice - libcontainer container kubepods-besteffort-pod31d1e1de_5d04_4fb1_a1dd_f2993de9970d.slice. Jun 20 19:50:57.162640 systemd[1]: Created slice kubepods-besteffort-pod5b042ef5_21b7_44b2_9e1b_65fc81686302.slice - libcontainer container kubepods-besteffort-pod5b042ef5_21b7_44b2_9e1b_65fc81686302.slice. Jun 20 19:50:57.175890 systemd[1]: Created slice kubepods-besteffort-podab90c10e_3dbf_41dc_bad0_77c34086d0f4.slice - libcontainer container kubepods-besteffort-podab90c10e_3dbf_41dc_bad0_77c34086d0f4.slice. Jun 20 19:50:57.186357 systemd[1]: Created slice kubepods-besteffort-pod089eb096_a391_43eb_8096_57fcbb4ee864.slice - libcontainer container kubepods-besteffort-pod089eb096_a391_43eb_8096_57fcbb4ee864.slice. Jun 20 19:50:57.196672 systemd[1]: Created slice kubepods-burstable-pod5868c2f2_8e4f_4f1f_9b42_392b9fdd6abc.slice - libcontainer container kubepods-burstable-pod5868c2f2_8e4f_4f1f_9b42_392b9fdd6abc.slice. 
Jun 20 19:50:57.197975 kubelet[2815]: I0620 19:50:57.197570 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ace87-f2d2-41b7-9607-5b2310ab1ded-config\") pod \"goldmane-5bd85449d4-9bxj5\" (UID: \"f93ace87-f2d2-41b7-9607-5b2310ab1ded\") " pod="calico-system/goldmane-5bd85449d4-9bxj5" Jun 20 19:50:57.199063 kubelet[2815]: I0620 19:50:57.198534 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-whisker-backend-key-pair\") pod \"whisker-6469cf95fb-cqtcz\" (UID: \"ab90c10e-3dbf-41dc-bad0-77c34086d0f4\") " pod="calico-system/whisker-6469cf95fb-cqtcz" Jun 20 19:50:57.199063 kubelet[2815]: I0620 19:50:57.198982 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mp6v\" (UniqueName: \"kubernetes.io/projected/5b042ef5-21b7-44b2-9e1b-65fc81686302-kube-api-access-4mp6v\") pod \"calico-kube-controllers-567b8bf998-7r74w\" (UID: \"5b042ef5-21b7-44b2-9e1b-65fc81686302\") " pod="calico-system/calico-kube-controllers-567b8bf998-7r74w" Jun 20 19:50:57.199291 kubelet[2815]: I0620 19:50:57.199015 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mfpr\" (UniqueName: \"kubernetes.io/projected/0d4d76ec-5329-4cb9-bf25-ffb312bbf65b-kube-api-access-5mfpr\") pod \"coredns-674b8bbfcf-fqp5r\" (UID: \"0d4d76ec-5329-4cb9-bf25-ffb312bbf65b\") " pod="kube-system/coredns-674b8bbfcf-fqp5r" Jun 20 19:50:57.199603 kubelet[2815]: I0620 19:50:57.199365 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31d1e1de-5d04-4fb1-a1dd-f2993de9970d-calico-apiserver-certs\") pod \"calico-apiserver-749bf4dccb-4wrqp\" (UID: 
\"31d1e1de-5d04-4fb1-a1dd-f2993de9970d\") " pod="calico-apiserver/calico-apiserver-749bf4dccb-4wrqp" Jun 20 19:50:57.199603 kubelet[2815]: I0620 19:50:57.199404 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f93ace87-f2d2-41b7-9607-5b2310ab1ded-goldmane-key-pair\") pod \"goldmane-5bd85449d4-9bxj5\" (UID: \"f93ace87-f2d2-41b7-9607-5b2310ab1ded\") " pod="calico-system/goldmane-5bd85449d4-9bxj5" Jun 20 19:50:57.199995 kubelet[2815]: I0620 19:50:57.199818 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47wz\" (UniqueName: \"kubernetes.io/projected/089eb096-a391-43eb-8096-57fcbb4ee864-kube-api-access-z47wz\") pod \"calico-apiserver-56bd6d945d-6pq8f\" (UID: \"089eb096-a391-43eb-8096-57fcbb4ee864\") " pod="calico-apiserver/calico-apiserver-56bd6d945d-6pq8f" Jun 20 19:50:57.200355 kubelet[2815]: I0620 19:50:57.200285 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4c109fd-2fe5-4963-8e66-cb4e40a83c1d-calico-apiserver-certs\") pod \"calico-apiserver-749bf4dccb-n8f2p\" (UID: \"a4c109fd-2fe5-4963-8e66-cb4e40a83c1d\") " pod="calico-apiserver/calico-apiserver-749bf4dccb-n8f2p" Jun 20 19:50:57.200699 kubelet[2815]: I0620 19:50:57.200331 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f93ace87-f2d2-41b7-9607-5b2310ab1ded-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-9bxj5\" (UID: \"f93ace87-f2d2-41b7-9607-5b2310ab1ded\") " pod="calico-system/goldmane-5bd85449d4-9bxj5" Jun 20 19:50:57.200699 kubelet[2815]: I0620 19:50:57.200654 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7zh6\" (UniqueName: 
\"kubernetes.io/projected/a4c109fd-2fe5-4963-8e66-cb4e40a83c1d-kube-api-access-x7zh6\") pod \"calico-apiserver-749bf4dccb-n8f2p\" (UID: \"a4c109fd-2fe5-4963-8e66-cb4e40a83c1d\") " pod="calico-apiserver/calico-apiserver-749bf4dccb-n8f2p" Jun 20 19:50:57.201005 kubelet[2815]: I0620 19:50:57.200682 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb2hv\" (UniqueName: \"kubernetes.io/projected/31d1e1de-5d04-4fb1-a1dd-f2993de9970d-kube-api-access-lb2hv\") pod \"calico-apiserver-749bf4dccb-4wrqp\" (UID: \"31d1e1de-5d04-4fb1-a1dd-f2993de9970d\") " pod="calico-apiserver/calico-apiserver-749bf4dccb-4wrqp" Jun 20 19:50:57.201005 kubelet[2815]: I0620 19:50:57.200962 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtrx\" (UniqueName: \"kubernetes.io/projected/f93ace87-f2d2-41b7-9607-5b2310ab1ded-kube-api-access-2mtrx\") pod \"goldmane-5bd85449d4-9bxj5\" (UID: \"f93ace87-f2d2-41b7-9607-5b2310ab1ded\") " pod="calico-system/goldmane-5bd85449d4-9bxj5" Jun 20 19:50:57.201369 kubelet[2815]: I0620 19:50:57.201300 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4k8v\" (UniqueName: \"kubernetes.io/projected/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-kube-api-access-q4k8v\") pod \"whisker-6469cf95fb-cqtcz\" (UID: \"ab90c10e-3dbf-41dc-bad0-77c34086d0f4\") " pod="calico-system/whisker-6469cf95fb-cqtcz" Jun 20 19:50:57.201369 kubelet[2815]: I0620 19:50:57.201329 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/089eb096-a391-43eb-8096-57fcbb4ee864-calico-apiserver-certs\") pod \"calico-apiserver-56bd6d945d-6pq8f\" (UID: \"089eb096-a391-43eb-8096-57fcbb4ee864\") " pod="calico-apiserver/calico-apiserver-56bd6d945d-6pq8f" Jun 20 19:50:57.201677 kubelet[2815]: I0620 19:50:57.201611 
2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-whisker-ca-bundle\") pod \"whisker-6469cf95fb-cqtcz\" (UID: \"ab90c10e-3dbf-41dc-bad0-77c34086d0f4\") " pod="calico-system/whisker-6469cf95fb-cqtcz" Jun 20 19:50:57.201677 kubelet[2815]: I0620 19:50:57.201642 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxh44\" (UniqueName: \"kubernetes.io/projected/5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc-kube-api-access-fxh44\") pod \"coredns-674b8bbfcf-v7lms\" (UID: \"5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc\") " pod="kube-system/coredns-674b8bbfcf-v7lms" Jun 20 19:50:57.202562 kubelet[2815]: I0620 19:50:57.202471 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d4d76ec-5329-4cb9-bf25-ffb312bbf65b-config-volume\") pod \"coredns-674b8bbfcf-fqp5r\" (UID: \"0d4d76ec-5329-4cb9-bf25-ffb312bbf65b\") " pod="kube-system/coredns-674b8bbfcf-fqp5r" Jun 20 19:50:57.202562 kubelet[2815]: I0620 19:50:57.202512 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b042ef5-21b7-44b2-9e1b-65fc81686302-tigera-ca-bundle\") pod \"calico-kube-controllers-567b8bf998-7r74w\" (UID: \"5b042ef5-21b7-44b2-9e1b-65fc81686302\") " pod="calico-system/calico-kube-controllers-567b8bf998-7r74w" Jun 20 19:50:57.202978 kubelet[2815]: I0620 19:50:57.202533 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc-config-volume\") pod \"coredns-674b8bbfcf-v7lms\" (UID: \"5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc\") " pod="kube-system/coredns-674b8bbfcf-v7lms" Jun 20 
19:50:57.209630 systemd[1]: Created slice kubepods-besteffort-podf93ace87_f2d2_41b7_9607_5b2310ab1ded.slice - libcontainer container kubepods-besteffort-podf93ace87_f2d2_41b7_9607_5b2310ab1ded.slice. Jun 20 19:50:57.406340 containerd[1551]: time="2025-06-20T19:50:57.405909164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:50:57.426285 containerd[1551]: time="2025-06-20T19:50:57.425637720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqp5r,Uid:0d4d76ec-5329-4cb9-bf25-ffb312bbf65b,Namespace:kube-system,Attempt:0,}" Jun 20 19:50:57.452192 containerd[1551]: time="2025-06-20T19:50:57.451994081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-n8f2p,Uid:a4c109fd-2fe5-4963-8e66-cb4e40a83c1d,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:50:57.459649 containerd[1551]: time="2025-06-20T19:50:57.459591702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-4wrqp,Uid:31d1e1de-5d04-4fb1-a1dd-f2993de9970d,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:50:57.470519 containerd[1551]: time="2025-06-20T19:50:57.470371962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567b8bf998-7r74w,Uid:5b042ef5-21b7-44b2-9e1b-65fc81686302,Namespace:calico-system,Attempt:0,}" Jun 20 19:50:57.482691 containerd[1551]: time="2025-06-20T19:50:57.482614900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6469cf95fb-cqtcz,Uid:ab90c10e-3dbf-41dc-bad0-77c34086d0f4,Namespace:calico-system,Attempt:0,}" Jun 20 19:50:57.493968 containerd[1551]: time="2025-06-20T19:50:57.493715714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bd6d945d-6pq8f,Uid:089eb096-a391-43eb-8096-57fcbb4ee864,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:50:57.516454 containerd[1551]: time="2025-06-20T19:50:57.516394953Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5bd85449d4-9bxj5,Uid:f93ace87-f2d2-41b7-9607-5b2310ab1ded,Namespace:calico-system,Attempt:0,}" Jun 20 19:50:57.517819 containerd[1551]: time="2025-06-20T19:50:57.517600536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v7lms,Uid:5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc,Namespace:kube-system,Attempt:0,}" Jun 20 19:50:57.660448 containerd[1551]: time="2025-06-20T19:50:57.659850870Z" level=error msg="Failed to destroy network for sandbox \"287e9bd12e1f852612691a68f78315284b52de7ddb185e7dec70ca63915b7be0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.667703 containerd[1551]: time="2025-06-20T19:50:57.667621177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqp5r,Uid:0d4d76ec-5329-4cb9-bf25-ffb312bbf65b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"287e9bd12e1f852612691a68f78315284b52de7ddb185e7dec70ca63915b7be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.668602 kubelet[2815]: E0620 19:50:57.668326 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287e9bd12e1f852612691a68f78315284b52de7ddb185e7dec70ca63915b7be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.668602 kubelet[2815]: E0620 19:50:57.668491 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"287e9bd12e1f852612691a68f78315284b52de7ddb185e7dec70ca63915b7be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fqp5r" Jun 20 19:50:57.668602 kubelet[2815]: E0620 19:50:57.668579 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287e9bd12e1f852612691a68f78315284b52de7ddb185e7dec70ca63915b7be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fqp5r" Jun 20 19:50:57.671604 kubelet[2815]: E0620 19:50:57.668679 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fqp5r_kube-system(0d4d76ec-5329-4cb9-bf25-ffb312bbf65b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fqp5r_kube-system(0d4d76ec-5329-4cb9-bf25-ffb312bbf65b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"287e9bd12e1f852612691a68f78315284b52de7ddb185e7dec70ca63915b7be0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fqp5r" podUID="0d4d76ec-5329-4cb9-bf25-ffb312bbf65b" Jun 20 19:50:57.684705 containerd[1551]: time="2025-06-20T19:50:57.684642210Z" level=error msg="Failed to destroy network for sandbox \"79cfa09ac15f2374d46b6649658028d8cf22e1c1d37bf733068367760993b96d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.689782 
containerd[1551]: time="2025-06-20T19:50:57.689721404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-n8f2p,Uid:a4c109fd-2fe5-4963-8e66-cb4e40a83c1d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cfa09ac15f2374d46b6649658028d8cf22e1c1d37bf733068367760993b96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.690750 kubelet[2815]: E0620 19:50:57.690112 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cfa09ac15f2374d46b6649658028d8cf22e1c1d37bf733068367760993b96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.690750 kubelet[2815]: E0620 19:50:57.690267 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cfa09ac15f2374d46b6649658028d8cf22e1c1d37bf733068367760993b96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749bf4dccb-n8f2p" Jun 20 19:50:57.690750 kubelet[2815]: E0620 19:50:57.690295 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cfa09ac15f2374d46b6649658028d8cf22e1c1d37bf733068367760993b96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-749bf4dccb-n8f2p" Jun 20 19:50:57.692287 kubelet[2815]: E0620 19:50:57.690400 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-749bf4dccb-n8f2p_calico-apiserver(a4c109fd-2fe5-4963-8e66-cb4e40a83c1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-749bf4dccb-n8f2p_calico-apiserver(a4c109fd-2fe5-4963-8e66-cb4e40a83c1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79cfa09ac15f2374d46b6649658028d8cf22e1c1d37bf733068367760993b96d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-749bf4dccb-n8f2p" podUID="a4c109fd-2fe5-4963-8e66-cb4e40a83c1d" Jun 20 19:50:57.761431 containerd[1551]: time="2025-06-20T19:50:57.761277233Z" level=error msg="Failed to destroy network for sandbox \"15b23f2fa09b07b219b62026e0fadb603711bc38ca220ce9a87dd6e4a59c0add\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.768611 containerd[1551]: time="2025-06-20T19:50:57.768139648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6469cf95fb-cqtcz,Uid:ab90c10e-3dbf-41dc-bad0-77c34086d0f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b23f2fa09b07b219b62026e0fadb603711bc38ca220ce9a87dd6e4a59c0add\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.770972 kubelet[2815]: E0620 19:50:57.769089 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"15b23f2fa09b07b219b62026e0fadb603711bc38ca220ce9a87dd6e4a59c0add\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.770972 kubelet[2815]: E0620 19:50:57.769923 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b23f2fa09b07b219b62026e0fadb603711bc38ca220ce9a87dd6e4a59c0add\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6469cf95fb-cqtcz" Jun 20 19:50:57.770972 kubelet[2815]: E0620 19:50:57.769968 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b23f2fa09b07b219b62026e0fadb603711bc38ca220ce9a87dd6e4a59c0add\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6469cf95fb-cqtcz" Jun 20 19:50:57.771159 kubelet[2815]: E0620 19:50:57.770067 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6469cf95fb-cqtcz_calico-system(ab90c10e-3dbf-41dc-bad0-77c34086d0f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6469cf95fb-cqtcz_calico-system(ab90c10e-3dbf-41dc-bad0-77c34086d0f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15b23f2fa09b07b219b62026e0fadb603711bc38ca220ce9a87dd6e4a59c0add\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-6469cf95fb-cqtcz" podUID="ab90c10e-3dbf-41dc-bad0-77c34086d0f4" Jun 20 19:50:57.787803 containerd[1551]: time="2025-06-20T19:50:57.787751596Z" level=error msg="Failed to destroy network for sandbox \"8053c35139dbf46d84396ed31377e222e177d3286a331eb7ca826e2f698f4e60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.790054 containerd[1551]: time="2025-06-20T19:50:57.789980457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-4wrqp,Uid:31d1e1de-5d04-4fb1-a1dd-f2993de9970d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8053c35139dbf46d84396ed31377e222e177d3286a331eb7ca826e2f698f4e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.790684 kubelet[2815]: E0620 19:50:57.790611 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8053c35139dbf46d84396ed31377e222e177d3286a331eb7ca826e2f698f4e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.790861 kubelet[2815]: E0620 19:50:57.790805 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8053c35139dbf46d84396ed31377e222e177d3286a331eb7ca826e2f698f4e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-749bf4dccb-4wrqp" Jun 20 19:50:57.791026 kubelet[2815]: E0620 19:50:57.790839 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8053c35139dbf46d84396ed31377e222e177d3286a331eb7ca826e2f698f4e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749bf4dccb-4wrqp" Jun 20 19:50:57.791145 kubelet[2815]: E0620 19:50:57.791102 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-749bf4dccb-4wrqp_calico-apiserver(31d1e1de-5d04-4fb1-a1dd-f2993de9970d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-749bf4dccb-4wrqp_calico-apiserver(31d1e1de-5d04-4fb1-a1dd-f2993de9970d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8053c35139dbf46d84396ed31377e222e177d3286a331eb7ca826e2f698f4e60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-749bf4dccb-4wrqp" podUID="31d1e1de-5d04-4fb1-a1dd-f2993de9970d" Jun 20 19:50:57.799339 containerd[1551]: time="2025-06-20T19:50:57.799028663Z" level=error msg="Failed to destroy network for sandbox \"4b533ee990920804bc52f9ceb7b909371cca325d2312b45dbbf9c287fd581d0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.804100 containerd[1551]: time="2025-06-20T19:50:57.804037835Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-v7lms,Uid:5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b533ee990920804bc52f9ceb7b909371cca325d2312b45dbbf9c287fd581d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.806411 kubelet[2815]: E0620 19:50:57.806344 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b533ee990920804bc52f9ceb7b909371cca325d2312b45dbbf9c287fd581d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.806508 kubelet[2815]: E0620 19:50:57.806441 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b533ee990920804bc52f9ceb7b909371cca325d2312b45dbbf9c287fd581d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v7lms" Jun 20 19:50:57.806508 kubelet[2815]: E0620 19:50:57.806473 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b533ee990920804bc52f9ceb7b909371cca325d2312b45dbbf9c287fd581d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v7lms" Jun 20 19:50:57.806602 kubelet[2815]: E0620 19:50:57.806539 2815 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v7lms_kube-system(5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-v7lms_kube-system(5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b533ee990920804bc52f9ceb7b909371cca325d2312b45dbbf9c287fd581d0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v7lms" podUID="5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc" Jun 20 19:50:57.836695 containerd[1551]: time="2025-06-20T19:50:57.836600323Z" level=error msg="Failed to destroy network for sandbox \"2733e487973bbd27f75f324aa000e56787174d6e1aa43481b6063baa773ed6d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.838856 containerd[1551]: time="2025-06-20T19:50:57.838721041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bd6d945d-6pq8f,Uid:089eb096-a391-43eb-8096-57fcbb4ee864,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2733e487973bbd27f75f324aa000e56787174d6e1aa43481b6063baa773ed6d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.839415 kubelet[2815]: E0620 19:50:57.839322 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2733e487973bbd27f75f324aa000e56787174d6e1aa43481b6063baa773ed6d1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.839526 kubelet[2815]: E0620 19:50:57.839459 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2733e487973bbd27f75f324aa000e56787174d6e1aa43481b6063baa773ed6d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bd6d945d-6pq8f" Jun 20 19:50:57.839526 kubelet[2815]: E0620 19:50:57.839494 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2733e487973bbd27f75f324aa000e56787174d6e1aa43481b6063baa773ed6d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bd6d945d-6pq8f" Jun 20 19:50:57.839777 kubelet[2815]: E0620 19:50:57.839584 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56bd6d945d-6pq8f_calico-apiserver(089eb096-a391-43eb-8096-57fcbb4ee864)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56bd6d945d-6pq8f_calico-apiserver(089eb096-a391-43eb-8096-57fcbb4ee864)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2733e487973bbd27f75f324aa000e56787174d6e1aa43481b6063baa773ed6d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56bd6d945d-6pq8f" podUID="089eb096-a391-43eb-8096-57fcbb4ee864" Jun 20 
19:50:57.843906 containerd[1551]: time="2025-06-20T19:50:57.843847123Z" level=error msg="Failed to destroy network for sandbox \"273b8176d063535d7b81c27eaf0c4c5709c6e2eb90b5fb19b67ba3d14dff62dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.845593 containerd[1551]: time="2025-06-20T19:50:57.845515027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-9bxj5,Uid:f93ace87-f2d2-41b7-9607-5b2310ab1ded,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"273b8176d063535d7b81c27eaf0c4c5709c6e2eb90b5fb19b67ba3d14dff62dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.846822 kubelet[2815]: E0620 19:50:57.846502 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"273b8176d063535d7b81c27eaf0c4c5709c6e2eb90b5fb19b67ba3d14dff62dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.847125 containerd[1551]: time="2025-06-20T19:50:57.847083072Z" level=error msg="Failed to destroy network for sandbox \"e7cb26771ab32e760e2fb691208b3624c2ae15d1889bb137f2a2dd1d769312d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.847424 kubelet[2815]: E0620 19:50:57.847364 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"273b8176d063535d7b81c27eaf0c4c5709c6e2eb90b5fb19b67ba3d14dff62dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-9bxj5" Jun 20 19:50:57.847551 kubelet[2815]: E0620 19:50:57.847434 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"273b8176d063535d7b81c27eaf0c4c5709c6e2eb90b5fb19b67ba3d14dff62dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-9bxj5" Jun 20 19:50:57.847596 kubelet[2815]: E0620 19:50:57.847537 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-9bxj5_calico-system(f93ace87-f2d2-41b7-9607-5b2310ab1ded)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-9bxj5_calico-system(f93ace87-f2d2-41b7-9607-5b2310ab1ded)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"273b8176d063535d7b81c27eaf0c4c5709c6e2eb90b5fb19b67ba3d14dff62dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-9bxj5" podUID="f93ace87-f2d2-41b7-9607-5b2310ab1ded" Jun 20 19:50:57.849629 containerd[1551]: time="2025-06-20T19:50:57.849549452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567b8bf998-7r74w,Uid:5b042ef5-21b7-44b2-9e1b-65fc81686302,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e7cb26771ab32e760e2fb691208b3624c2ae15d1889bb137f2a2dd1d769312d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.850139 kubelet[2815]: E0620 19:50:57.850107 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7cb26771ab32e760e2fb691208b3624c2ae15d1889bb137f2a2dd1d769312d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:57.850235 kubelet[2815]: E0620 19:50:57.850211 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7cb26771ab32e760e2fb691208b3624c2ae15d1889bb137f2a2dd1d769312d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567b8bf998-7r74w" Jun 20 19:50:57.850324 kubelet[2815]: E0620 19:50:57.850234 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7cb26771ab32e760e2fb691208b3624c2ae15d1889bb137f2a2dd1d769312d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567b8bf998-7r74w" Jun 20 19:50:57.850427 kubelet[2815]: E0620 19:50:57.850312 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-567b8bf998-7r74w_calico-system(5b042ef5-21b7-44b2-9e1b-65fc81686302)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-567b8bf998-7r74w_calico-system(5b042ef5-21b7-44b2-9e1b-65fc81686302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7cb26771ab32e760e2fb691208b3624c2ae15d1889bb137f2a2dd1d769312d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-567b8bf998-7r74w" podUID="5b042ef5-21b7-44b2-9e1b-65fc81686302" Jun 20 19:50:58.163299 systemd[1]: Created slice kubepods-besteffort-podc9d6a569_963e_4451_b36f_587404b621dd.slice - libcontainer container kubepods-besteffort-podc9d6a569_963e_4451_b36f_587404b621dd.slice. Jun 20 19:50:58.171256 containerd[1551]: time="2025-06-20T19:50:58.171132425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5ldnw,Uid:c9d6a569-963e-4451-b36f-587404b621dd,Namespace:calico-system,Attempt:0,}" Jun 20 19:50:58.293580 containerd[1551]: time="2025-06-20T19:50:58.293454975Z" level=error msg="Failed to destroy network for sandbox \"0b1d31b11a8f98a35bfd40ed5ee9dc66e356c93595b087f58223a57b53ee67f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:58.298417 containerd[1551]: time="2025-06-20T19:50:58.298266434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5ldnw,Uid:c9d6a569-963e-4451-b36f-587404b621dd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d31b11a8f98a35bfd40ed5ee9dc66e356c93595b087f58223a57b53ee67f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:58.299894 
kubelet[2815]: E0620 19:50:58.299376 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d31b11a8f98a35bfd40ed5ee9dc66e356c93595b087f58223a57b53ee67f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:50:58.299894 kubelet[2815]: E0620 19:50:58.299487 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d31b11a8f98a35bfd40ed5ee9dc66e356c93595b087f58223a57b53ee67f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5ldnw" Jun 20 19:50:58.299894 kubelet[2815]: E0620 19:50:58.299568 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d31b11a8f98a35bfd40ed5ee9dc66e356c93595b087f58223a57b53ee67f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5ldnw" Jun 20 19:50:58.300789 kubelet[2815]: E0620 19:50:58.299671 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5ldnw_calico-system(c9d6a569-963e-4451-b36f-587404b621dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5ldnw_calico-system(c9d6a569-963e-4451-b36f-587404b621dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b1d31b11a8f98a35bfd40ed5ee9dc66e356c93595b087f58223a57b53ee67f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5ldnw" podUID="c9d6a569-963e-4451-b36f-587404b621dd" Jun 20 19:51:01.894244 kubelet[2815]: I0620 19:51:01.893882 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:51:08.153913 containerd[1551]: time="2025-06-20T19:51:08.153629784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6469cf95fb-cqtcz,Uid:ab90c10e-3dbf-41dc-bad0-77c34086d0f4,Namespace:calico-system,Attempt:0,}" Jun 20 19:51:08.372202 containerd[1551]: time="2025-06-20T19:51:08.372067882Z" level=error msg="Failed to destroy network for sandbox \"ebe35a68e0385a21a8af2d8d0eeac9f110d8eefaefaaf5ef3d73507eec78a530\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:08.378024 systemd[1]: run-netns-cni\x2d07a3fb14\x2d7023\x2d3388\x2dcf7b\x2d0c346aaf13af.mount: Deactivated successfully. 
Jun 20 19:51:08.383983 containerd[1551]: time="2025-06-20T19:51:08.383620095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6469cf95fb-cqtcz,Uid:ab90c10e-3dbf-41dc-bad0-77c34086d0f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe35a68e0385a21a8af2d8d0eeac9f110d8eefaefaaf5ef3d73507eec78a530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:08.386160 kubelet[2815]: E0620 19:51:08.385961 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe35a68e0385a21a8af2d8d0eeac9f110d8eefaefaaf5ef3d73507eec78a530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:08.387350 kubelet[2815]: E0620 19:51:08.386275 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe35a68e0385a21a8af2d8d0eeac9f110d8eefaefaaf5ef3d73507eec78a530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6469cf95fb-cqtcz" Jun 20 19:51:08.387427 kubelet[2815]: E0620 19:51:08.387319 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe35a68e0385a21a8af2d8d0eeac9f110d8eefaefaaf5ef3d73507eec78a530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6469cf95fb-cqtcz" Jun 20 19:51:08.389204 kubelet[2815]: E0620 19:51:08.387866 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6469cf95fb-cqtcz_calico-system(ab90c10e-3dbf-41dc-bad0-77c34086d0f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6469cf95fb-cqtcz_calico-system(ab90c10e-3dbf-41dc-bad0-77c34086d0f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebe35a68e0385a21a8af2d8d0eeac9f110d8eefaefaaf5ef3d73507eec78a530\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6469cf95fb-cqtcz" podUID="ab90c10e-3dbf-41dc-bad0-77c34086d0f4" Jun 20 19:51:09.153326 containerd[1551]: time="2025-06-20T19:51:09.153262619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-n8f2p,Uid:a4c109fd-2fe5-4963-8e66-cb4e40a83c1d,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:51:09.376021 containerd[1551]: time="2025-06-20T19:51:09.375827823Z" level=error msg="Failed to destroy network for sandbox \"f05d605fca7305cb7b598e4eff6e0ff56de8def402ec54ffa0e65faee48da64e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:09.379425 systemd[1]: run-netns-cni\x2dcddac295\x2d375e\x2d27af\x2d2ad1\x2d40a1e4c5be2a.mount: Deactivated successfully. 
Jun 20 19:51:09.382673 kubelet[2815]: E0620 19:51:09.379991 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05d605fca7305cb7b598e4eff6e0ff56de8def402ec54ffa0e65faee48da64e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:09.382673 kubelet[2815]: E0620 19:51:09.380076 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05d605fca7305cb7b598e4eff6e0ff56de8def402ec54ffa0e65faee48da64e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749bf4dccb-n8f2p" Jun 20 19:51:09.382673 kubelet[2815]: E0620 19:51:09.380113 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05d605fca7305cb7b598e4eff6e0ff56de8def402ec54ffa0e65faee48da64e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749bf4dccb-n8f2p" Jun 20 19:51:09.383423 containerd[1551]: time="2025-06-20T19:51:09.379551921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-n8f2p,Uid:a4c109fd-2fe5-4963-8e66-cb4e40a83c1d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05d605fca7305cb7b598e4eff6e0ff56de8def402ec54ffa0e65faee48da64e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jun 20 19:51:09.383514 kubelet[2815]: E0620 19:51:09.382876 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-749bf4dccb-n8f2p_calico-apiserver(a4c109fd-2fe5-4963-8e66-cb4e40a83c1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-749bf4dccb-n8f2p_calico-apiserver(a4c109fd-2fe5-4963-8e66-cb4e40a83c1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f05d605fca7305cb7b598e4eff6e0ff56de8def402ec54ffa0e65faee48da64e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-749bf4dccb-n8f2p" podUID="a4c109fd-2fe5-4963-8e66-cb4e40a83c1d" Jun 20 19:51:09.862263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277934114.mount: Deactivated successfully. 
Jun 20 19:51:09.892939 containerd[1551]: time="2025-06-20T19:51:09.892877590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:09.895101 containerd[1551]: time="2025-06-20T19:51:09.895036469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:51:09.896744 containerd[1551]: time="2025-06-20T19:51:09.896686549Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:09.899284 containerd[1551]: time="2025-06-20T19:51:09.899228420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:09.900124 containerd[1551]: time="2025-06-20T19:51:09.899900216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 12.49393633s" Jun 20 19:51:09.900124 containerd[1551]: time="2025-06-20T19:51:09.899955480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:51:09.971287 containerd[1551]: time="2025-06-20T19:51:09.970930882Z" level=info msg="CreateContainer within sandbox \"37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:51:09.997234 containerd[1551]: time="2025-06-20T19:51:09.995498066Z" level=info msg="Container 
6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:10.026373 containerd[1551]: time="2025-06-20T19:51:10.026317162Z" level=info msg="CreateContainer within sandbox \"37f23cc3529cd4747ad8f2118c09e0f557310de4a95239d0637eb1f095e9b6bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\"" Jun 20 19:51:10.028734 containerd[1551]: time="2025-06-20T19:51:10.028593232Z" level=info msg="StartContainer for \"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\"" Jun 20 19:51:10.032376 containerd[1551]: time="2025-06-20T19:51:10.032256737Z" level=info msg="connecting to shim 6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d" address="unix:///run/containerd/s/5b51048ad20e2f105ada92df96cd96c3839dc7e22ce3cf91c32d66b1ddf97b8b" protocol=ttrpc version=3 Jun 20 19:51:10.145757 systemd[1]: Started cri-containerd-6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d.scope - libcontainer container 6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d. 
Jun 20 19:51:10.150393 containerd[1551]: time="2025-06-20T19:51:10.150343231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqp5r,Uid:0d4d76ec-5329-4cb9-bf25-ffb312bbf65b,Namespace:kube-system,Attempt:0,}" Jun 20 19:51:10.151959 containerd[1551]: time="2025-06-20T19:51:10.151614898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v7lms,Uid:5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc,Namespace:kube-system,Attempt:0,}" Jun 20 19:51:10.151959 containerd[1551]: time="2025-06-20T19:51:10.151703585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-9bxj5,Uid:f93ace87-f2d2-41b7-9607-5b2310ab1ded,Namespace:calico-system,Attempt:0,}" Jun 20 19:51:10.374103 containerd[1551]: time="2025-06-20T19:51:10.374014536Z" level=error msg="Failed to destroy network for sandbox \"e04f28918f448f02b1a1f1f6a50d8fff3cbcf04f8e48366530fb1dbc521a5735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:10.382570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4248744283.mount: Deactivated successfully. Jun 20 19:51:10.399787 containerd[1551]: time="2025-06-20T19:51:10.398067460Z" level=error msg="Failed to destroy network for sandbox \"b2ea35369e898fee8ec25661058ad0ae2704e9ec3a0c8e449d96f1ed702af1ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:10.403482 systemd[1]: run-netns-cni\x2d339757d2\x2d7c75\x2d402c\x2dffda\x2d0e111d782561.mount: Deactivated successfully. 
Jun 20 19:51:10.414840 containerd[1551]: time="2025-06-20T19:51:10.414753710Z" level=error msg="Failed to destroy network for sandbox \"545c495c6861e6f68d63d4e81304e81643c3aeec3d8afdfba949f7afce954c72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:10.417829 systemd[1]: run-netns-cni\x2d568cb438\x2dae19\x2dfaa0\x2deaec\x2d4211ef6e196b.mount: Deactivated successfully. Jun 20 19:51:10.610937 containerd[1551]: time="2025-06-20T19:51:10.610772427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-9bxj5,Uid:f93ace87-f2d2-41b7-9607-5b2310ab1ded,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04f28918f448f02b1a1f1f6a50d8fff3cbcf04f8e48366530fb1dbc521a5735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:10.613234 kubelet[2815]: E0620 19:51:10.612469 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04f28918f448f02b1a1f1f6a50d8fff3cbcf04f8e48366530fb1dbc521a5735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:10.613234 kubelet[2815]: E0620 19:51:10.612675 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04f28918f448f02b1a1f1f6a50d8fff3cbcf04f8e48366530fb1dbc521a5735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-5bd85449d4-9bxj5" Jun 20 19:51:10.613234 kubelet[2815]: E0620 19:51:10.612733 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04f28918f448f02b1a1f1f6a50d8fff3cbcf04f8e48366530fb1dbc521a5735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-9bxj5" Jun 20 19:51:10.614251 kubelet[2815]: E0620 19:51:10.612919 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-9bxj5_calico-system(f93ace87-f2d2-41b7-9607-5b2310ab1ded)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-9bxj5_calico-system(f93ace87-f2d2-41b7-9607-5b2310ab1ded)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e04f28918f448f02b1a1f1f6a50d8fff3cbcf04f8e48366530fb1dbc521a5735\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-9bxj5" podUID="f93ace87-f2d2-41b7-9607-5b2310ab1ded" Jun 20 19:51:10.616222 containerd[1551]: time="2025-06-20T19:51:10.614709487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqp5r,Uid:0d4d76ec-5329-4cb9-bf25-ffb312bbf65b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ea35369e898fee8ec25661058ad0ae2704e9ec3a0c8e449d96f1ed702af1ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:10.618362 containerd[1551]: 
time="2025-06-20T19:51:10.618247626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v7lms,Uid:5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"545c495c6861e6f68d63d4e81304e81643c3aeec3d8afdfba949f7afce954c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:10.619608 kubelet[2815]: E0620 19:51:10.618832 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"545c495c6861e6f68d63d4e81304e81643c3aeec3d8afdfba949f7afce954c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:10.619608 kubelet[2815]: E0620 19:51:10.618992 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"545c495c6861e6f68d63d4e81304e81643c3aeec3d8afdfba949f7afce954c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v7lms" Jun 20 19:51:10.619608 kubelet[2815]: E0620 19:51:10.619050 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"545c495c6861e6f68d63d4e81304e81643c3aeec3d8afdfba949f7afce954c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v7lms" Jun 20 19:51:10.620124 kubelet[2815]: 
E0620 19:51:10.619214 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v7lms_kube-system(5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-v7lms_kube-system(5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"545c495c6861e6f68d63d4e81304e81643c3aeec3d8afdfba949f7afce954c72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v7lms" podUID="5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc" Jun 20 19:51:10.620124 kubelet[2815]: E0620 19:51:10.619319 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ea35369e898fee8ec25661058ad0ae2704e9ec3a0c8e449d96f1ed702af1ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:51:10.620124 kubelet[2815]: E0620 19:51:10.619373 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ea35369e898fee8ec25661058ad0ae2704e9ec3a0c8e449d96f1ed702af1ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fqp5r" Jun 20 19:51:10.620560 kubelet[2815]: E0620 19:51:10.619434 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ea35369e898fee8ec25661058ad0ae2704e9ec3a0c8e449d96f1ed702af1ae\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fqp5r" Jun 20 19:51:10.620560 kubelet[2815]: E0620 19:51:10.619508 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fqp5r_kube-system(0d4d76ec-5329-4cb9-bf25-ffb312bbf65b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fqp5r_kube-system(0d4d76ec-5329-4cb9-bf25-ffb312bbf65b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2ea35369e898fee8ec25661058ad0ae2704e9ec3a0c8e449d96f1ed702af1ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fqp5r" podUID="0d4d76ec-5329-4cb9-bf25-ffb312bbf65b" Jun 20 19:51:10.629937 containerd[1551]: time="2025-06-20T19:51:10.629695362Z" level=info msg="StartContainer for \"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" returns successfully" Jun 20 19:51:10.830180 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:51:10.830364 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
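Every sandbox failure above reduces to the same root cause: the CNI plugin stats `/var/lib/calico/nodename`, a file that the calico/node container writes once it is running and has mounted `/var/lib/calico/`. A minimal sketch of that readiness gate (an assumed re-creation for illustration, not the plugin's actual code):

```python
from pathlib import Path

def calico_node_ready(nodename_file: str = "/var/lib/calico/nodename") -> bool:
    """Hypothetical check mirroring the CNI plugin's failure mode:
    until calico/node writes its nodename file, every CNI add/delete
    fails with 'stat /var/lib/calico/nodename: no such file or directory'."""
    path = Path(nodename_file)
    if not path.is_file():
        # This is the branch all the RunPodSandbox errors above are hitting.
        return False
    # A non-empty file means calico/node has registered this host.
    return path.read_text().strip() != ""
```

Once calico-node finishes pulling and starts (visible later in the log as the `calico-node-tz88k` startup), the file appears and the same pods retry successfully.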
Jun 20 19:51:11.072262 kubelet[2815]: I0620 19:51:11.072189 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-whisker-backend-key-pair\") pod \"ab90c10e-3dbf-41dc-bad0-77c34086d0f4\" (UID: \"ab90c10e-3dbf-41dc-bad0-77c34086d0f4\") " Jun 20 19:51:11.072458 kubelet[2815]: I0620 19:51:11.072311 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4k8v\" (UniqueName: \"kubernetes.io/projected/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-kube-api-access-q4k8v\") pod \"ab90c10e-3dbf-41dc-bad0-77c34086d0f4\" (UID: \"ab90c10e-3dbf-41dc-bad0-77c34086d0f4\") " Jun 20 19:51:11.072458 kubelet[2815]: I0620 19:51:11.072342 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-whisker-ca-bundle\") pod \"ab90c10e-3dbf-41dc-bad0-77c34086d0f4\" (UID: \"ab90c10e-3dbf-41dc-bad0-77c34086d0f4\") " Jun 20 19:51:11.072963 kubelet[2815]: I0620 19:51:11.072928 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ab90c10e-3dbf-41dc-bad0-77c34086d0f4" (UID: "ab90c10e-3dbf-41dc-bad0-77c34086d0f4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:51:11.087918 systemd[1]: var-lib-kubelet-pods-ab90c10e\x2d3dbf\x2d41dc\x2dbad0\x2d77c34086d0f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4k8v.mount: Deactivated successfully. 
Jun 20 19:51:11.090670 kubelet[2815]: I0620 19:51:11.090528 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-kube-api-access-q4k8v" (OuterVolumeSpecName: "kube-api-access-q4k8v") pod "ab90c10e-3dbf-41dc-bad0-77c34086d0f4" (UID: "ab90c10e-3dbf-41dc-bad0-77c34086d0f4"). InnerVolumeSpecName "kube-api-access-q4k8v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:51:11.093283 kubelet[2815]: I0620 19:51:11.092928 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ab90c10e-3dbf-41dc-bad0-77c34086d0f4" (UID: "ab90c10e-3dbf-41dc-bad0-77c34086d0f4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:51:11.093810 systemd[1]: var-lib-kubelet-pods-ab90c10e\x2d3dbf\x2d41dc\x2dbad0\x2d77c34086d0f4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
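The systemd mount units in these teardown lines use systemd's name-escaping scheme: path separators `/` become `-`, and literal `-` (or other reserved bytes such as `~`) are escaped as `\xNN`. A small sketch of how to recover the original path from a unit name like `run-netns-cni\x2d568cb438\x2d....mount` (standard library only; `systemd-escape --unescape` does the same on the command line):

```python
import re

def systemd_unescape_path(unit: str) -> str:
    """Convert a systemd mount-unit name back into the filesystem path."""
    name = unit.removesuffix(".mount")
    # Separator "-" becomes "/" first; the "\xNN" sequences contain no raw "-".
    name = "/" + name.replace("-", "/")
    # Then decode each \xNN escape back to its original character
    # (\x2d is "-", \x7e is "~").
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)

print(systemd_unescape_path(
    r"run-netns-cni\x2d568cb438\x2dae19\x2dfaa0\x2deaec\x2d4211ef6e196b.mount"))
# -> /run/netns/cni-568cb438-ae19-faa0-eaec-4211ef6e196b
```

This decodes the `run-netns-cni...` unit back to the CNI network namespace path, and the `var-lib-kubelet-pods-...` units back to the per-pod volume directories being torn down here.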
Jun 20 19:51:11.150124 containerd[1551]: time="2025-06-20T19:51:11.150051384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-4wrqp,Uid:31d1e1de-5d04-4fb1-a1dd-f2993de9970d,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:51:11.151845 containerd[1551]: time="2025-06-20T19:51:11.151795340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567b8bf998-7r74w,Uid:5b042ef5-21b7-44b2-9e1b-65fc81686302,Namespace:calico-system,Attempt:0,}" Jun 20 19:51:11.154816 containerd[1551]: time="2025-06-20T19:51:11.154777351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bd6d945d-6pq8f,Uid:089eb096-a391-43eb-8096-57fcbb4ee864,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:51:11.155008 containerd[1551]: time="2025-06-20T19:51:11.154913207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5ldnw,Uid:c9d6a569-963e-4451-b36f-587404b621dd,Namespace:calico-system,Attempt:0,}" Jun 20 19:51:11.172045 systemd[1]: Removed slice kubepods-besteffort-podab90c10e_3dbf_41dc_bad0_77c34086d0f4.slice - libcontainer container kubepods-besteffort-podab90c10e_3dbf_41dc_bad0_77c34086d0f4.slice. 
Jun 20 19:51:11.175049 kubelet[2815]: I0620 19:51:11.175005 2815 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-whisker-backend-key-pair\") on node \"ci-4344-1-0-0-4524070979.novalocal\" DevicePath \"\"" Jun 20 19:51:11.175049 kubelet[2815]: I0620 19:51:11.175037 2815 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4k8v\" (UniqueName: \"kubernetes.io/projected/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-kube-api-access-q4k8v\") on node \"ci-4344-1-0-0-4524070979.novalocal\" DevicePath \"\"" Jun 20 19:51:11.175049 kubelet[2815]: I0620 19:51:11.175050 2815 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab90c10e-3dbf-41dc-bad0-77c34086d0f4-whisker-ca-bundle\") on node \"ci-4344-1-0-0-4524070979.novalocal\" DevicePath \"\"" Jun 20 19:51:11.562580 systemd-networkd[1443]: calid6ebabdc44b: Link UP Jun 20 19:51:11.564732 systemd-networkd[1443]: calid6ebabdc44b: Gained carrier Jun 20 19:51:11.598515 containerd[1551]: 2025-06-20 19:51:11.255 [INFO][4051] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:51:11.598515 containerd[1551]: 2025-06-20 19:51:11.388 [INFO][4051] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0 calico-kube-controllers-567b8bf998- calico-system 5b042ef5-21b7-44b2-9e1b-65fc81686302 848 0 2025-06-20 19:50:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:567b8bf998 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal calico-kube-controllers-567b8bf998-7r74w eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] calid6ebabdc44b [] [] }} ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Namespace="calico-system" Pod="calico-kube-controllers-567b8bf998-7r74w" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-" Jun 20 19:51:11.598515 containerd[1551]: 2025-06-20 19:51:11.389 [INFO][4051] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Namespace="calico-system" Pod="calico-kube-controllers-567b8bf998-7r74w" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" Jun 20 19:51:11.598515 containerd[1551]: 2025-06-20 19:51:11.452 [INFO][4116] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" HandleID="k8s-pod-network.3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" Jun 20 19:51:11.600101 containerd[1551]: 2025-06-20 19:51:11.453 [INFO][4116] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" HandleID="k8s-pod-network.3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fd80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"calico-kube-controllers-567b8bf998-7r74w", "timestamp":"2025-06-20 19:51:11.452857435 +0000 UTC"}, Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:11.600101 containerd[1551]: 2025-06-20 19:51:11.453 [INFO][4116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:11.600101 containerd[1551]: 2025-06-20 19:51:11.453 [INFO][4116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:51:11.600101 containerd[1551]: 2025-06-20 19:51:11.453 [INFO][4116] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:11.600101 containerd[1551]: 2025-06-20 19:51:11.479 [INFO][4116] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.600101 containerd[1551]: 2025-06-20 19:51:11.496 [INFO][4116] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.600101 containerd[1551]: 2025-06-20 19:51:11.508 [INFO][4116] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.600101 containerd[1551]: 2025-06-20 19:51:11.510 [INFO][4116] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.600101 containerd[1551]: 2025-06-20 19:51:11.514 [INFO][4116] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.600812 containerd[1551]: 2025-06-20 19:51:11.514 [INFO][4116] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.600812 containerd[1551]: 2025-06-20 19:51:11.516 [INFO][4116] ipam/ipam.go 1764: Creating 
new handle: k8s-pod-network.3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788 Jun 20 19:51:11.600812 containerd[1551]: 2025-06-20 19:51:11.524 [INFO][4116] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.600812 containerd[1551]: 2025-06-20 19:51:11.533 [INFO][4116] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.193/26] block=192.168.47.192/26 handle="k8s-pod-network.3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.600812 containerd[1551]: 2025-06-20 19:51:11.533 [INFO][4116] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.193/26] handle="k8s-pod-network.3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.600812 containerd[1551]: 2025-06-20 19:51:11.533 [INFO][4116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:51:11.600812 containerd[1551]: 2025-06-20 19:51:11.533 [INFO][4116] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.193/26] IPv6=[] ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" HandleID="k8s-pod-network.3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" Jun 20 19:51:11.601110 containerd[1551]: 2025-06-20 19:51:11.542 [INFO][4051] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Namespace="calico-system" Pod="calico-kube-controllers-567b8bf998-7r74w" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0", GenerateName:"calico-kube-controllers-567b8bf998-", Namespace:"calico-system", SelfLink:"", UID:"5b042ef5-21b7-44b2-9e1b-65fc81686302", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567b8bf998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"calico-kube-controllers-567b8bf998-7r74w", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6ebabdc44b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:11.602462 containerd[1551]: 2025-06-20 19:51:11.542 [INFO][4051] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.193/32] ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Namespace="calico-system" Pod="calico-kube-controllers-567b8bf998-7r74w" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" Jun 20 19:51:11.602462 containerd[1551]: 2025-06-20 19:51:11.542 [INFO][4051] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6ebabdc44b ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Namespace="calico-system" Pod="calico-kube-controllers-567b8bf998-7r74w" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" Jun 20 19:51:11.602462 containerd[1551]: 2025-06-20 19:51:11.568 [INFO][4051] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Namespace="calico-system" Pod="calico-kube-controllers-567b8bf998-7r74w" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" Jun 20 19:51:11.602646 containerd[1551]: 2025-06-20 19:51:11.569 [INFO][4051] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" Namespace="calico-system" Pod="calico-kube-controllers-567b8bf998-7r74w" 
WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0", GenerateName:"calico-kube-controllers-567b8bf998-", Namespace:"calico-system", SelfLink:"", UID:"5b042ef5-21b7-44b2-9e1b-65fc81686302", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567b8bf998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788", Pod:"calico-kube-controllers-567b8bf998-7r74w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6ebabdc44b", MAC:"fe:78:17:98:32:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:11.602842 containerd[1551]: 2025-06-20 19:51:11.594 [INFO][4051] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" 
Namespace="calico-system" Pod="calico-kube-controllers-567b8bf998-7r74w" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--kube--controllers--567b8bf998--7r74w-eth0" Jun 20 19:51:11.720511 systemd-networkd[1443]: cali38edeeb23dc: Link UP Jun 20 19:51:11.722179 systemd-networkd[1443]: cali38edeeb23dc: Gained carrier Jun 20 19:51:11.764067 kubelet[2815]: I0620 19:51:11.762245 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tz88k" podStartSLOduration=2.939523613 podStartE2EDuration="31.761272929s" podCreationTimestamp="2025-06-20 19:50:40 +0000 UTC" firstStartedPulling="2025-06-20 19:50:41.079654954 +0000 UTC m=+22.094839714" lastFinishedPulling="2025-06-20 19:51:09.901404281 +0000 UTC m=+50.916589030" observedRunningTime="2025-06-20 19:51:11.758662007 +0000 UTC m=+52.773846766" watchObservedRunningTime="2025-06-20 19:51:11.761272929 +0000 UTC m=+52.776457738" Jun 20 19:51:11.773727 containerd[1551]: time="2025-06-20T19:51:11.773676966Z" level=info msg="connecting to shim 3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788" address="unix:///run/containerd/s/72237f0b44b2d9b8f5fb5f6b0fe9753fe137d2e2eb377784fd4707aece795903" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:11.778374 containerd[1551]: 2025-06-20 19:51:11.301 [INFO][4068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:51:11.778374 containerd[1551]: 2025-06-20 19:51:11.388 [INFO][4068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0 calico-apiserver-56bd6d945d- calico-apiserver 089eb096-a391-43eb-8096-57fcbb4ee864 847 0 2025-06-20 19:50:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56bd6d945d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal calico-apiserver-56bd6d945d-6pq8f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38edeeb23dc [] [] }} ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-6pq8f" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-" Jun 20 19:51:11.778374 containerd[1551]: 2025-06-20 19:51:11.388 [INFO][4068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-6pq8f" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" Jun 20 19:51:11.778374 containerd[1551]: 2025-06-20 19:51:11.496 [INFO][4111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" HandleID="k8s-pod-network.1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" Jun 20 19:51:11.778666 containerd[1551]: 2025-06-20 19:51:11.496 [INFO][4111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" HandleID="k8s-pod-network.1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"calico-apiserver-56bd6d945d-6pq8f", "timestamp":"2025-06-20 19:51:11.496349205 +0000 UTC"}, 
Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:11.778666 containerd[1551]: 2025-06-20 19:51:11.497 [INFO][4111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:11.778666 containerd[1551]: 2025-06-20 19:51:11.533 [INFO][4111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:51:11.778666 containerd[1551]: 2025-06-20 19:51:11.533 [INFO][4111] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:11.778666 containerd[1551]: 2025-06-20 19:51:11.578 [INFO][4111] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.778666 containerd[1551]: 2025-06-20 19:51:11.609 [INFO][4111] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.778666 containerd[1551]: 2025-06-20 19:51:11.617 [INFO][4111] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.778666 containerd[1551]: 2025-06-20 19:51:11.623 [INFO][4111] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.778666 containerd[1551]: 2025-06-20 19:51:11.630 [INFO][4111] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.778998 containerd[1551]: 2025-06-20 19:51:11.630 [INFO][4111] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" 
host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.778998 containerd[1551]: 2025-06-20 19:51:11.633 [INFO][4111] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254 Jun 20 19:51:11.778998 containerd[1551]: 2025-06-20 19:51:11.650 [INFO][4111] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.778998 containerd[1551]: 2025-06-20 19:51:11.681 [INFO][4111] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.194/26] block=192.168.47.192/26 handle="k8s-pod-network.1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.778998 containerd[1551]: 2025-06-20 19:51:11.682 [INFO][4111] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.194/26] handle="k8s-pod-network.1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.778998 containerd[1551]: 2025-06-20 19:51:11.682 [INFO][4111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:51:11.778998 containerd[1551]: 2025-06-20 19:51:11.682 [INFO][4111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.194/26] IPv6=[] ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" HandleID="k8s-pod-network.1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" Jun 20 19:51:11.779763 containerd[1551]: 2025-06-20 19:51:11.694 [INFO][4068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-6pq8f" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0", GenerateName:"calico-apiserver-56bd6d945d-", Namespace:"calico-apiserver", SelfLink:"", UID:"089eb096-a391-43eb-8096-57fcbb4ee864", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bd6d945d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"calico-apiserver-56bd6d945d-6pq8f", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38edeeb23dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:11.779851 containerd[1551]: 2025-06-20 19:51:11.715 [INFO][4068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.194/32] ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-6pq8f" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" Jun 20 19:51:11.779851 containerd[1551]: 2025-06-20 19:51:11.715 [INFO][4068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38edeeb23dc ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-6pq8f" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" Jun 20 19:51:11.779851 containerd[1551]: 2025-06-20 19:51:11.723 [INFO][4068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-6pq8f" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" Jun 20 19:51:11.780000 containerd[1551]: 2025-06-20 19:51:11.730 [INFO][4068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-6pq8f" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0", GenerateName:"calico-apiserver-56bd6d945d-", Namespace:"calico-apiserver", SelfLink:"", UID:"089eb096-a391-43eb-8096-57fcbb4ee864", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bd6d945d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254", Pod:"calico-apiserver-56bd6d945d-6pq8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38edeeb23dc", MAC:"da:fa:21:9f:f4:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:11.780079 containerd[1551]: 2025-06-20 19:51:11.767 [INFO][4068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-6pq8f" 
WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--6pq8f-eth0" Jun 20 19:51:11.834812 systemd[1]: Started cri-containerd-3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788.scope - libcontainer container 3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788. Jun 20 19:51:11.863433 containerd[1551]: time="2025-06-20T19:51:11.863377920Z" level=info msg="connecting to shim 1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254" address="unix:///run/containerd/s/0f66bf4ba947531db1b8ad836a077c4158d40968459a5067000e7fa28cf01c7e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:11.914289 systemd-networkd[1443]: calie3bb2900ef3: Link UP Jun 20 19:51:11.916497 systemd-networkd[1443]: calie3bb2900ef3: Gained carrier Jun 20 19:51:11.951465 systemd[1]: Started cri-containerd-1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254.scope - libcontainer container 1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254. Jun 20 19:51:11.968006 systemd[1]: Created slice kubepods-besteffort-pod9acd3075_6fc4_4d75_b6eb_10df66e1aebd.slice - libcontainer container kubepods-besteffort-pod9acd3075_6fc4_4d75_b6eb_10df66e1aebd.slice. 
Jun 20 19:51:11.985333 containerd[1551]: 2025-06-20 19:51:11.302 [INFO][4060] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:51:11.985333 containerd[1551]: 2025-06-20 19:51:11.388 [INFO][4060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0 calico-apiserver-749bf4dccb- calico-apiserver 31d1e1de-5d04-4fb1-a1dd-f2993de9970d 845 0 2025-06-20 19:50:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:749bf4dccb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal calico-apiserver-749bf4dccb-4wrqp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie3bb2900ef3 [] [] }} ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-4wrqp" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-" Jun 20 19:51:11.985333 containerd[1551]: 2025-06-20 19:51:11.388 [INFO][4060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-4wrqp" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:11.985333 containerd[1551]: 2025-06-20 19:51:11.498 [INFO][4113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" 
Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:11.985876 containerd[1551]: 2025-06-20 19:51:11.498 [INFO][4113] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"calico-apiserver-749bf4dccb-4wrqp", "timestamp":"2025-06-20 19:51:11.498083033 +0000 UTC"}, Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:11.985876 containerd[1551]: 2025-06-20 19:51:11.499 [INFO][4113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:11.985876 containerd[1551]: 2025-06-20 19:51:11.682 [INFO][4113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:51:11.985876 containerd[1551]: 2025-06-20 19:51:11.682 [INFO][4113] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:11.985876 containerd[1551]: 2025-06-20 19:51:11.717 [INFO][4113] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.985876 containerd[1551]: 2025-06-20 19:51:11.765 [INFO][4113] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.985876 containerd[1551]: 2025-06-20 19:51:11.783 [INFO][4113] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.985876 containerd[1551]: 2025-06-20 19:51:11.798 [INFO][4113] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.985876 containerd[1551]: 2025-06-20 19:51:11.813 [INFO][4113] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.986531 containerd[1551]: 2025-06-20 19:51:11.813 [INFO][4113] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.986531 containerd[1551]: 2025-06-20 19:51:11.825 [INFO][4113] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd Jun 20 19:51:11.986531 containerd[1551]: 2025-06-20 19:51:11.852 [INFO][4113] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.986531 
containerd[1551]: 2025-06-20 19:51:11.889 [INFO][4113] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.195/26] block=192.168.47.192/26 handle="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.986531 containerd[1551]: 2025-06-20 19:51:11.889 [INFO][4113] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.195/26] handle="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:11.986531 containerd[1551]: 2025-06-20 19:51:11.889 [INFO][4113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:51:11.986531 containerd[1551]: 2025-06-20 19:51:11.889 [INFO][4113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.195/26] IPv6=[] ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:11.986712 containerd[1551]: 2025-06-20 19:51:11.897 [INFO][4060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-4wrqp" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0", GenerateName:"calico-apiserver-749bf4dccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"31d1e1de-5d04-4fb1-a1dd-f2993de9970d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 36, 0, time.Local), DeletionTimestamp:<nil>, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749bf4dccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"calico-apiserver-749bf4dccb-4wrqp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3bb2900ef3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:11.986788 containerd[1551]: 2025-06-20 19:51:11.900 [INFO][4060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.195/32] ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-4wrqp" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:11.986788 containerd[1551]: 2025-06-20 19:51:11.902 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3bb2900ef3 ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-4wrqp" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:11.986788 containerd[1551]: 2025-06-20 19:51:11.918 [INFO][4060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-4wrqp" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:11.986869 containerd[1551]: 2025-06-20 19:51:11.923 [INFO][4060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-4wrqp" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0", GenerateName:"calico-apiserver-749bf4dccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"31d1e1de-5d04-4fb1-a1dd-f2993de9970d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749bf4dccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd", Pod:"calico-apiserver-749bf4dccb-4wrqp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3bb2900ef3", MAC:"de:04:ca:29:e3:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:11.986936 containerd[1551]: 2025-06-20 19:51:11.981 [INFO][4060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-4wrqp" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:11.990734 kubelet[2815]: I0620 19:51:11.990663 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdll\" (UniqueName: \"kubernetes.io/projected/9acd3075-6fc4-4d75-b6eb-10df66e1aebd-kube-api-access-cwdll\") pod \"whisker-8675f76d8f-fbgz6\" (UID: \"9acd3075-6fc4-4d75-b6eb-10df66e1aebd\") " pod="calico-system/whisker-8675f76d8f-fbgz6" Jun 20 19:51:11.990886 kubelet[2815]: I0620 19:51:11.990744 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9acd3075-6fc4-4d75-b6eb-10df66e1aebd-whisker-ca-bundle\") pod \"whisker-8675f76d8f-fbgz6\" (UID: \"9acd3075-6fc4-4d75-b6eb-10df66e1aebd\") " pod="calico-system/whisker-8675f76d8f-fbgz6" Jun 20 19:51:11.990886 kubelet[2815]: I0620 19:51:11.990813 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9acd3075-6fc4-4d75-b6eb-10df66e1aebd-whisker-backend-key-pair\") pod \"whisker-8675f76d8f-fbgz6\" (UID: \"9acd3075-6fc4-4d75-b6eb-10df66e1aebd\") " pod="calico-system/whisker-8675f76d8f-fbgz6" Jun 20 19:51:12.049455 containerd[1551]: 
time="2025-06-20T19:51:12.049375367Z" level=info msg="connecting to shim 5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" address="unix:///run/containerd/s/51c2587872949a9d5e11249b42968b4bbe0182f9ce7c8222114a2c0a2e037142" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:12.092356 systemd-networkd[1443]: calidee965cedef: Link UP Jun 20 19:51:12.095434 systemd-networkd[1443]: calidee965cedef: Gained carrier Jun 20 19:51:12.130779 systemd[1]: Started cri-containerd-5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd.scope - libcontainer container 5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd. Jun 20 19:51:12.143387 containerd[1551]: 2025-06-20 19:51:11.285 [INFO][4071] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:51:12.143387 containerd[1551]: 2025-06-20 19:51:11.393 [INFO][4071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0 csi-node-driver- calico-system c9d6a569-963e-4451-b36f-587404b621dd 722 0 2025-06-20 19:50:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal csi-node-driver-5ldnw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidee965cedef [] [] }} ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Namespace="calico-system" Pod="csi-node-driver-5ldnw" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-" Jun 20 19:51:12.143387 containerd[1551]: 2025-06-20 19:51:11.393 [INFO][4071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Namespace="calico-system" Pod="csi-node-driver-5ldnw" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" Jun 20 19:51:12.143387 containerd[1551]: 2025-06-20 19:51:11.500 [INFO][4117] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" HandleID="k8s-pod-network.d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" Jun 20 19:51:12.143700 containerd[1551]: 2025-06-20 19:51:11.501 [INFO][4117] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" HandleID="k8s-pod-network.d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"csi-node-driver-5ldnw", "timestamp":"2025-06-20 19:51:11.500138288 +0000 UTC"}, Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:12.143700 containerd[1551]: 2025-06-20 19:51:11.501 [INFO][4117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:12.143700 containerd[1551]: 2025-06-20 19:51:11.892 [INFO][4117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:51:12.143700 containerd[1551]: 2025-06-20 19:51:11.892 [INFO][4117] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:12.143700 containerd[1551]: 2025-06-20 19:51:11.993 [INFO][4117] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.143700 containerd[1551]: 2025-06-20 19:51:12.019 [INFO][4117] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.143700 containerd[1551]: 2025-06-20 19:51:12.037 [INFO][4117] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.143700 containerd[1551]: 2025-06-20 19:51:12.042 [INFO][4117] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.143700 containerd[1551]: 2025-06-20 19:51:12.047 [INFO][4117] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.143976 containerd[1551]: 2025-06-20 19:51:12.047 [INFO][4117] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.143976 containerd[1551]: 2025-06-20 19:51:12.051 [INFO][4117] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d Jun 20 19:51:12.143976 containerd[1551]: 2025-06-20 19:51:12.059 [INFO][4117] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.143976 
containerd[1551]: 2025-06-20 19:51:12.079 [INFO][4117] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.196/26] block=192.168.47.192/26 handle="k8s-pod-network.d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.143976 containerd[1551]: 2025-06-20 19:51:12.079 [INFO][4117] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.196/26] handle="k8s-pod-network.d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.143976 containerd[1551]: 2025-06-20 19:51:12.079 [INFO][4117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:51:12.143976 containerd[1551]: 2025-06-20 19:51:12.079 [INFO][4117] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.196/26] IPv6=[] ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" HandleID="k8s-pod-network.d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" Jun 20 19:51:12.144148 containerd[1551]: 2025-06-20 19:51:12.083 [INFO][4071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Namespace="calico-system" Pod="csi-node-driver-5ldnw" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d6a569-963e-4451-b36f-587404b621dd", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"csi-node-driver-5ldnw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidee965cedef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:12.145126 containerd[1551]: 2025-06-20 19:51:12.083 [INFO][4071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.196/32] ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Namespace="calico-system" Pod="csi-node-driver-5ldnw" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" Jun 20 19:51:12.145126 containerd[1551]: 2025-06-20 19:51:12.083 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidee965cedef ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Namespace="calico-system" Pod="csi-node-driver-5ldnw" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" Jun 20 19:51:12.145126 containerd[1551]: 2025-06-20 19:51:12.097 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Namespace="calico-system" Pod="csi-node-driver-5ldnw" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" Jun 20 19:51:12.145277 containerd[1551]: 2025-06-20 19:51:12.110 [INFO][4071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Namespace="calico-system" Pod="csi-node-driver-5ldnw" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d6a569-963e-4451-b36f-587404b621dd", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d", Pod:"csi-node-driver-5ldnw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidee965cedef", MAC:"ce:7a:e7:ec:f2:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:12.145351 containerd[1551]: 2025-06-20 19:51:12.130 [INFO][4071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" Namespace="calico-system" Pod="csi-node-driver-5ldnw" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-csi--node--driver--5ldnw-eth0" Jun 20 19:51:12.156373 containerd[1551]: time="2025-06-20T19:51:12.156089314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567b8bf998-7r74w,Uid:5b042ef5-21b7-44b2-9e1b-65fc81686302,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788\"" Jun 20 19:51:12.161409 containerd[1551]: time="2025-06-20T19:51:12.161336623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:51:12.205440 containerd[1551]: time="2025-06-20T19:51:12.205148116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bd6d945d-6pq8f,Uid:089eb096-a391-43eb-8096-57fcbb4ee864,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254\"" Jun 20 19:51:12.206530 containerd[1551]: time="2025-06-20T19:51:12.205938406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"cb5b81c4c519444597c6d9491500a9757093f23d30741d49bb6a90ffaa8c52e8\" pid:4179 exit_status:1 exited_at:{seconds:1750449072 nanos:204527427}" Jun 20 19:51:12.242274 containerd[1551]: time="2025-06-20T19:51:12.242110380Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-4wrqp,Uid:31d1e1de-5d04-4fb1-a1dd-f2993de9970d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\"" Jun 20 19:51:12.254785 containerd[1551]: time="2025-06-20T19:51:12.254725006Z" level=info msg="connecting to shim d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d" address="unix:///run/containerd/s/925e1a48b3c41aaac75320c13be5d28b57a549ad3d9ce64efad364e9cedc04cf" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:12.282956 containerd[1551]: time="2025-06-20T19:51:12.282534479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8675f76d8f-fbgz6,Uid:9acd3075-6fc4-4d75-b6eb-10df66e1aebd,Namespace:calico-system,Attempt:0,}" Jun 20 19:51:12.290496 systemd[1]: Started cri-containerd-d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d.scope - libcontainer container d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d. 
Jun 20 19:51:12.377994 containerd[1551]: time="2025-06-20T19:51:12.377661788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5ldnw,Uid:c9d6a569-963e-4451-b36f-587404b621dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d\"" Jun 20 19:51:12.548604 systemd-networkd[1443]: calic16c336d68a: Link UP Jun 20 19:51:12.548838 systemd-networkd[1443]: calic16c336d68a: Gained carrier Jun 20 19:51:12.570828 containerd[1551]: 2025-06-20 19:51:12.349 [INFO][4368] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:51:12.570828 containerd[1551]: 2025-06-20 19:51:12.403 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0 whisker-8675f76d8f- calico-system 9acd3075-6fc4-4d75-b6eb-10df66e1aebd 949 0 2025-06-20 19:51:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8675f76d8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal whisker-8675f76d8f-fbgz6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic16c336d68a [] [] }} ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Namespace="calico-system" Pod="whisker-8675f76d8f-fbgz6" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-" Jun 20 19:51:12.570828 containerd[1551]: 2025-06-20 19:51:12.403 [INFO][4368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Namespace="calico-system" Pod="whisker-8675f76d8f-fbgz6" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" Jun 20 19:51:12.570828 containerd[1551]: 
2025-06-20 19:51:12.476 [INFO][4406] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" HandleID="k8s-pod-network.c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" Jun 20 19:51:12.571505 containerd[1551]: 2025-06-20 19:51:12.477 [INFO][4406] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" HandleID="k8s-pod-network.c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"whisker-8675f76d8f-fbgz6", "timestamp":"2025-06-20 19:51:12.476823592 +0000 UTC"}, Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:12.571505 containerd[1551]: 2025-06-20 19:51:12.477 [INFO][4406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:12.571505 containerd[1551]: 2025-06-20 19:51:12.477 [INFO][4406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:51:12.571505 containerd[1551]: 2025-06-20 19:51:12.478 [INFO][4406] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:12.571505 containerd[1551]: 2025-06-20 19:51:12.492 [INFO][4406] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.571505 containerd[1551]: 2025-06-20 19:51:12.501 [INFO][4406] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.571505 containerd[1551]: 2025-06-20 19:51:12.512 [INFO][4406] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.571505 containerd[1551]: 2025-06-20 19:51:12.515 [INFO][4406] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.571505 containerd[1551]: 2025-06-20 19:51:12.520 [INFO][4406] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.571953 containerd[1551]: 2025-06-20 19:51:12.520 [INFO][4406] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.571953 containerd[1551]: 2025-06-20 19:51:12.524 [INFO][4406] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014 Jun 20 19:51:12.571953 containerd[1551]: 2025-06-20 19:51:12.529 [INFO][4406] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.571953 
containerd[1551]: 2025-06-20 19:51:12.538 [INFO][4406] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.197/26] block=192.168.47.192/26 handle="k8s-pod-network.c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.571953 containerd[1551]: 2025-06-20 19:51:12.538 [INFO][4406] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.197/26] handle="k8s-pod-network.c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:12.571953 containerd[1551]: 2025-06-20 19:51:12.538 [INFO][4406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:51:12.571953 containerd[1551]: 2025-06-20 19:51:12.538 [INFO][4406] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.197/26] IPv6=[] ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" HandleID="k8s-pod-network.c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" Jun 20 19:51:12.573666 containerd[1551]: 2025-06-20 19:51:12.542 [INFO][4368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Namespace="calico-system" Pod="whisker-8675f76d8f-fbgz6" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0", GenerateName:"whisker-8675f76d8f-", Namespace:"calico-system", SelfLink:"", UID:"9acd3075-6fc4-4d75-b6eb-10df66e1aebd", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 51, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8675f76d8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"whisker-8675f76d8f-fbgz6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic16c336d68a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:12.573777 containerd[1551]: 2025-06-20 19:51:12.542 [INFO][4368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.197/32] ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Namespace="calico-system" Pod="whisker-8675f76d8f-fbgz6" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" Jun 20 19:51:12.573777 containerd[1551]: 2025-06-20 19:51:12.542 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic16c336d68a ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Namespace="calico-system" Pod="whisker-8675f76d8f-fbgz6" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" Jun 20 19:51:12.573777 containerd[1551]: 2025-06-20 19:51:12.545 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Namespace="calico-system" Pod="whisker-8675f76d8f-fbgz6" 
WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" Jun 20 19:51:12.573876 containerd[1551]: 2025-06-20 19:51:12.546 [INFO][4368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Namespace="calico-system" Pod="whisker-8675f76d8f-fbgz6" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0", GenerateName:"whisker-8675f76d8f-", Namespace:"calico-system", SelfLink:"", UID:"9acd3075-6fc4-4d75-b6eb-10df66e1aebd", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 51, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8675f76d8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014", Pod:"whisker-8675f76d8f-fbgz6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic16c336d68a", MAC:"5a:f5:51:78:f5:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:12.573973 containerd[1551]: 2025-06-20 19:51:12.566 [INFO][4368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" Namespace="calico-system" Pod="whisker-8675f76d8f-fbgz6" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-whisker--8675f76d8f--fbgz6-eth0" Jun 20 19:51:12.629345 containerd[1551]: time="2025-06-20T19:51:12.628206638Z" level=info msg="connecting to shim c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014" address="unix:///run/containerd/s/48568cbf66f6fb64e95fee85e80a1e1b21c82c29037b622e85847ca98e1001e1" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:12.736404 systemd[1]: Started cri-containerd-c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014.scope - libcontainer container c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014. Jun 20 19:51:12.860464 containerd[1551]: time="2025-06-20T19:51:12.860345708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8675f76d8f-fbgz6,Uid:9acd3075-6fc4-4d75-b6eb-10df66e1aebd,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014\"" Jun 20 19:51:13.004428 systemd-networkd[1443]: calid6ebabdc44b: Gained IPv6LL Jun 20 19:51:13.153449 kubelet[2815]: I0620 19:51:13.153395 2815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab90c10e-3dbf-41dc-bad0-77c34086d0f4" path="/var/lib/kubelet/pods/ab90c10e-3dbf-41dc-bad0-77c34086d0f4/volumes" Jun 20 19:51:13.201135 containerd[1551]: time="2025-06-20T19:51:13.201085809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"ac144fca63d417f6e19029b9061f0d47ed841f05910ac9c3f361356d93fadf8d\" pid:4538 exit_status:1 exited_at:{seconds:1750449073 nanos:200562632}" Jun 20 19:51:13.387641 
systemd-networkd[1443]: calidee965cedef: Gained IPv6LL Jun 20 19:51:13.451913 systemd-networkd[1443]: cali38edeeb23dc: Gained IPv6LL Jun 20 19:51:13.580566 systemd-networkd[1443]: vxlan.calico: Link UP Jun 20 19:51:13.580576 systemd-networkd[1443]: vxlan.calico: Gained carrier Jun 20 19:51:13.963385 systemd-networkd[1443]: calie3bb2900ef3: Gained IPv6LL Jun 20 19:51:14.539392 systemd-networkd[1443]: calic16c336d68a: Gained IPv6LL Jun 20 19:51:15.179632 systemd-networkd[1443]: vxlan.calico: Gained IPv6LL Jun 20 19:51:19.152090 containerd[1551]: time="2025-06-20T19:51:19.151829263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:19.160124 containerd[1551]: time="2025-06-20T19:51:19.159872021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:51:19.167766 containerd[1551]: time="2025-06-20T19:51:19.167470031Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:19.188418 containerd[1551]: time="2025-06-20T19:51:19.188245636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:19.192254 containerd[1551]: time="2025-06-20T19:51:19.191513395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 7.030003052s" Jun 20 19:51:19.192254 
containerd[1551]: time="2025-06-20T19:51:19.191717670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:51:19.195955 containerd[1551]: time="2025-06-20T19:51:19.195145510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:51:19.251060 containerd[1551]: time="2025-06-20T19:51:19.251007901Z" level=info msg="CreateContainer within sandbox \"3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:51:19.269931 containerd[1551]: time="2025-06-20T19:51:19.268988128Z" level=info msg="Container 7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:19.295492 containerd[1551]: time="2025-06-20T19:51:19.295386530Z" level=info msg="CreateContainer within sandbox \"3f5f4dc0b808e33602598965751c0bb511db2cc38409aa4596b317aa28ec5788\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\"" Jun 20 19:51:19.297406 containerd[1551]: time="2025-06-20T19:51:19.296967539Z" level=info msg="StartContainer for \"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\"" Jun 20 19:51:19.303268 containerd[1551]: time="2025-06-20T19:51:19.302821241Z" level=info msg="connecting to shim 7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734" address="unix:///run/containerd/s/72237f0b44b2d9b8f5fb5f6b0fe9753fe137d2e2eb377784fd4707aece795903" protocol=ttrpc version=3 Jun 20 19:51:19.340754 systemd[1]: Started cri-containerd-7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734.scope - libcontainer container 7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734. 
Jun 20 19:51:19.424805 containerd[1551]: time="2025-06-20T19:51:19.424544158Z" level=info msg="StartContainer for \"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" returns successfully" Jun 20 19:51:19.801698 containerd[1551]: time="2025-06-20T19:51:19.801631690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"24c66651351146f136df9f71a36424ec9fbc3ac479d7d43166c91d9c0042c5c8\" pid:4741 exited_at:{seconds:1750449079 nanos:800702429}" Jun 20 19:51:19.898457 kubelet[2815]: I0620 19:51:19.898241 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-567b8bf998-7r74w" podStartSLOduration=31.864760048 podStartE2EDuration="38.898207914s" podCreationTimestamp="2025-06-20 19:50:41 +0000 UTC" firstStartedPulling="2025-06-20 19:51:12.160908086 +0000 UTC m=+53.176092845" lastFinishedPulling="2025-06-20 19:51:19.194355912 +0000 UTC m=+60.209540711" observedRunningTime="2025-06-20 19:51:19.737040497 +0000 UTC m=+60.752225326" watchObservedRunningTime="2025-06-20 19:51:19.898207914 +0000 UTC m=+60.913392673" Jun 20 19:51:21.153646 containerd[1551]: time="2025-06-20T19:51:21.152230038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-9bxj5,Uid:f93ace87-f2d2-41b7-9607-5b2310ab1ded,Namespace:calico-system,Attempt:0,}" Jun 20 19:51:21.412755 systemd-networkd[1443]: cali95aedb5b95d: Link UP Jun 20 19:51:21.415395 systemd-networkd[1443]: cali95aedb5b95d: Gained carrier Jun 20 19:51:21.457265 containerd[1551]: 2025-06-20 19:51:21.267 [INFO][4750] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0 goldmane-5bd85449d4- calico-system f93ace87-f2d2-41b7-9607-5b2310ab1ded 851 0 2025-06-20 19:50:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal goldmane-5bd85449d4-9bxj5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali95aedb5b95d [] [] }} ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Namespace="calico-system" Pod="goldmane-5bd85449d4-9bxj5" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-" Jun 20 19:51:21.457265 containerd[1551]: 2025-06-20 19:51:21.268 [INFO][4750] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Namespace="calico-system" Pod="goldmane-5bd85449d4-9bxj5" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" Jun 20 19:51:21.457265 containerd[1551]: 2025-06-20 19:51:21.319 [INFO][4763] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" HandleID="k8s-pod-network.38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" Jun 20 19:51:21.457617 containerd[1551]: 2025-06-20 19:51:21.319 [INFO][4763] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" HandleID="k8s-pod-network.38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"goldmane-5bd85449d4-9bxj5", "timestamp":"2025-06-20 19:51:21.319053892 +0000 UTC"}, 
Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:21.457617 containerd[1551]: 2025-06-20 19:51:21.320 [INFO][4763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:21.457617 containerd[1551]: 2025-06-20 19:51:21.320 [INFO][4763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:51:21.457617 containerd[1551]: 2025-06-20 19:51:21.320 [INFO][4763] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:21.457617 containerd[1551]: 2025-06-20 19:51:21.332 [INFO][4763] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:21.457617 containerd[1551]: 2025-06-20 19:51:21.341 [INFO][4763] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:21.457617 containerd[1551]: 2025-06-20 19:51:21.351 [INFO][4763] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:21.457617 containerd[1551]: 2025-06-20 19:51:21.359 [INFO][4763] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:21.457617 containerd[1551]: 2025-06-20 19:51:21.365 [INFO][4763] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:21.457898 containerd[1551]: 2025-06-20 19:51:21.365 [INFO][4763] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" 
host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:21.457898 containerd[1551]: 2025-06-20 19:51:21.367 [INFO][4763] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38 Jun 20 19:51:21.457898 containerd[1551]: 2025-06-20 19:51:21.379 [INFO][4763] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:21.457898 containerd[1551]: 2025-06-20 19:51:21.399 [INFO][4763] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.198/26] block=192.168.47.192/26 handle="k8s-pod-network.38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:21.457898 containerd[1551]: 2025-06-20 19:51:21.399 [INFO][4763] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.198/26] handle="k8s-pod-network.38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:21.457898 containerd[1551]: 2025-06-20 19:51:21.399 [INFO][4763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:51:21.457898 containerd[1551]: 2025-06-20 19:51:21.399 [INFO][4763] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.198/26] IPv6=[] ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" HandleID="k8s-pod-network.38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" Jun 20 19:51:21.458077 containerd[1551]: 2025-06-20 19:51:21.402 [INFO][4750] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Namespace="calico-system" Pod="goldmane-5bd85449d4-9bxj5" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"f93ace87-f2d2-41b7-9607-5b2310ab1ded", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"goldmane-5bd85449d4-9bxj5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali95aedb5b95d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:21.458153 containerd[1551]: 2025-06-20 19:51:21.402 [INFO][4750] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.198/32] ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Namespace="calico-system" Pod="goldmane-5bd85449d4-9bxj5" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" Jun 20 19:51:21.458153 containerd[1551]: 2025-06-20 19:51:21.402 [INFO][4750] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95aedb5b95d ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Namespace="calico-system" Pod="goldmane-5bd85449d4-9bxj5" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" Jun 20 19:51:21.458153 containerd[1551]: 2025-06-20 19:51:21.416 [INFO][4750] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Namespace="calico-system" Pod="goldmane-5bd85449d4-9bxj5" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" Jun 20 19:51:21.461411 containerd[1551]: 2025-06-20 19:51:21.418 [INFO][4750] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Namespace="calico-system" Pod="goldmane-5bd85449d4-9bxj5" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"f93ace87-f2d2-41b7-9607-5b2310ab1ded", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38", Pod:"goldmane-5bd85449d4-9bxj5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali95aedb5b95d", MAC:"b2:c2:24:14:d8:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:21.461526 containerd[1551]: 2025-06-20 19:51:21.451 [INFO][4750] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" Namespace="calico-system" Pod="goldmane-5bd85449d4-9bxj5" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-goldmane--5bd85449d4--9bxj5-eth0" Jun 20 19:51:21.540097 containerd[1551]: time="2025-06-20T19:51:21.540032367Z" level=info msg="connecting to shim 38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38" 
address="unix:///run/containerd/s/839054d7af41fe23a0a59bf981c20cbdaa422caa5ef55c137ef8110b54648a04" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:21.643439 systemd[1]: Started cri-containerd-38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38.scope - libcontainer container 38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38. Jun 20 19:51:21.726507 containerd[1551]: time="2025-06-20T19:51:21.726457632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-9bxj5,Uid:f93ace87-f2d2-41b7-9607-5b2310ab1ded,Namespace:calico-system,Attempt:0,} returns sandbox id \"38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38\"" Jun 20 19:51:23.151150 containerd[1551]: time="2025-06-20T19:51:23.149339116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-n8f2p,Uid:a4c109fd-2fe5-4963-8e66-cb4e40a83c1d,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:51:23.155431 containerd[1551]: time="2025-06-20T19:51:23.155360594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqp5r,Uid:0d4d76ec-5329-4cb9-bf25-ffb312bbf65b,Namespace:kube-system,Attempt:0,}" Jun 20 19:51:23.180448 systemd-networkd[1443]: cali95aedb5b95d: Gained IPv6LL Jun 20 19:51:23.442630 systemd-networkd[1443]: caliedd4e94b734: Link UP Jun 20 19:51:23.445426 systemd-networkd[1443]: caliedd4e94b734: Gained carrier Jun 20 19:51:23.493421 containerd[1551]: 2025-06-20 19:51:23.236 [INFO][4830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0 calico-apiserver-749bf4dccb- calico-apiserver a4c109fd-2fe5-4963-8e66-cb4e40a83c1d 850 0 2025-06-20 19:50:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:749bf4dccb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal calico-apiserver-749bf4dccb-n8f2p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliedd4e94b734 [] [] }} ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-n8f2p" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-" Jun 20 19:51:23.493421 containerd[1551]: 2025-06-20 19:51:23.236 [INFO][4830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-n8f2p" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:23.493421 containerd[1551]: 2025-06-20 19:51:23.337 [INFO][4854] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:23.493782 containerd[1551]: 2025-06-20 19:51:23.337 [INFO][4854] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038ac40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"calico-apiserver-749bf4dccb-n8f2p", "timestamp":"2025-06-20 19:51:23.33746369 +0000 UTC"}, 
Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:23.493782 containerd[1551]: 2025-06-20 19:51:23.337 [INFO][4854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:23.493782 containerd[1551]: 2025-06-20 19:51:23.338 [INFO][4854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:51:23.493782 containerd[1551]: 2025-06-20 19:51:23.338 [INFO][4854] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:23.493782 containerd[1551]: 2025-06-20 19:51:23.356 [INFO][4854] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.493782 containerd[1551]: 2025-06-20 19:51:23.367 [INFO][4854] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.493782 containerd[1551]: 2025-06-20 19:51:23.381 [INFO][4854] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.493782 containerd[1551]: 2025-06-20 19:51:23.386 [INFO][4854] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.493782 containerd[1551]: 2025-06-20 19:51:23.394 [INFO][4854] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.495443 containerd[1551]: 2025-06-20 19:51:23.394 [INFO][4854] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" 
host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.495443 containerd[1551]: 2025-06-20 19:51:23.399 [INFO][4854] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712 Jun 20 19:51:23.495443 containerd[1551]: 2025-06-20 19:51:23.408 [INFO][4854] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.495443 containerd[1551]: 2025-06-20 19:51:23.420 [INFO][4854] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.199/26] block=192.168.47.192/26 handle="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.495443 containerd[1551]: 2025-06-20 19:51:23.420 [INFO][4854] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.199/26] handle="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.495443 containerd[1551]: 2025-06-20 19:51:23.421 [INFO][4854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:51:23.495443 containerd[1551]: 2025-06-20 19:51:23.421 [INFO][4854] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.199/26] IPv6=[] ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:23.495669 containerd[1551]: 2025-06-20 19:51:23.429 [INFO][4830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-n8f2p" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0", GenerateName:"calico-apiserver-749bf4dccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4c109fd-2fe5-4963-8e66-cb4e40a83c1d", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749bf4dccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"calico-apiserver-749bf4dccb-n8f2p", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd4e94b734", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:23.495744 containerd[1551]: 2025-06-20 19:51:23.430 [INFO][4830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.199/32] ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-n8f2p" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:23.495744 containerd[1551]: 2025-06-20 19:51:23.430 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedd4e94b734 ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-n8f2p" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:23.495744 containerd[1551]: 2025-06-20 19:51:23.449 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-n8f2p" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:23.495858 containerd[1551]: 2025-06-20 19:51:23.456 [INFO][4830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-n8f2p" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0", GenerateName:"calico-apiserver-749bf4dccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4c109fd-2fe5-4963-8e66-cb4e40a83c1d", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749bf4dccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712", Pod:"calico-apiserver-749bf4dccb-n8f2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd4e94b734", MAC:"6e:31:a5:7b:95:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:23.495945 containerd[1551]: 2025-06-20 19:51:23.485 [INFO][4830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Namespace="calico-apiserver" Pod="calico-apiserver-749bf4dccb-n8f2p" 
WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:23.617344 systemd-networkd[1443]: cali37716b254b0: Link UP Jun 20 19:51:23.620968 systemd-networkd[1443]: cali37716b254b0: Gained carrier Jun 20 19:51:23.630597 containerd[1551]: time="2025-06-20T19:51:23.630525923Z" level=info msg="connecting to shim 50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" address="unix:///run/containerd/s/618f957ca7050924635a355c417c06939b463194c9308b0f1921ce66d81dd259" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:23.673363 containerd[1551]: 2025-06-20 19:51:23.346 [INFO][4842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0 coredns-674b8bbfcf- kube-system 0d4d76ec-5329-4cb9-bf25-ffb312bbf65b 844 0 2025-06-20 19:50:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal coredns-674b8bbfcf-fqp5r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali37716b254b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqp5r" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-" Jun 20 19:51:23.673363 containerd[1551]: 2025-06-20 19:51:23.347 [INFO][4842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqp5r" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" Jun 20 19:51:23.673363 containerd[1551]: 2025-06-20 19:51:23.464 
[INFO][4863] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" HandleID="k8s-pod-network.45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" Jun 20 19:51:23.674228 containerd[1551]: 2025-06-20 19:51:23.465 [INFO][4863] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" HandleID="k8s-pod-network.45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8140), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"coredns-674b8bbfcf-fqp5r", "timestamp":"2025-06-20 19:51:23.46483635 +0000 UTC"}, Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:23.674228 containerd[1551]: 2025-06-20 19:51:23.465 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:23.674228 containerd[1551]: 2025-06-20 19:51:23.466 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:51:23.674228 containerd[1551]: 2025-06-20 19:51:23.466 [INFO][4863] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:23.674228 containerd[1551]: 2025-06-20 19:51:23.484 [INFO][4863] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.674228 containerd[1551]: 2025-06-20 19:51:23.504 [INFO][4863] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.674228 containerd[1551]: 2025-06-20 19:51:23.517 [INFO][4863] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.674228 containerd[1551]: 2025-06-20 19:51:23.522 [INFO][4863] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.674228 containerd[1551]: 2025-06-20 19:51:23.529 [INFO][4863] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.675871 containerd[1551]: 2025-06-20 19:51:23.536 [INFO][4863] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.675871 containerd[1551]: 2025-06-20 19:51:23.543 [INFO][4863] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71 Jun 20 19:51:23.675871 containerd[1551]: 2025-06-20 19:51:23.564 [INFO][4863] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.675871 
containerd[1551]: 2025-06-20 19:51:23.592 [INFO][4863] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.200/26] block=192.168.47.192/26 handle="k8s-pod-network.45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.675871 containerd[1551]: 2025-06-20 19:51:23.592 [INFO][4863] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.200/26] handle="k8s-pod-network.45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:23.675871 containerd[1551]: 2025-06-20 19:51:23.596 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:51:23.675871 containerd[1551]: 2025-06-20 19:51:23.596 [INFO][4863] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.200/26] IPv6=[] ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" HandleID="k8s-pod-network.45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" Jun 20 19:51:23.676147 containerd[1551]: 2025-06-20 19:51:23.609 [INFO][4842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqp5r" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0d4d76ec-5329-4cb9-bf25-ffb312bbf65b", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"coredns-674b8bbfcf-fqp5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali37716b254b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:23.676147 containerd[1551]: 2025-06-20 19:51:23.610 [INFO][4842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.200/32] ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqp5r" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" Jun 20 19:51:23.676147 containerd[1551]: 2025-06-20 19:51:23.610 [INFO][4842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37716b254b0 ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqp5r" 
WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" Jun 20 19:51:23.676147 containerd[1551]: 2025-06-20 19:51:23.627 [INFO][4842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqp5r" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" Jun 20 19:51:23.676147 containerd[1551]: 2025-06-20 19:51:23.640 [INFO][4842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqp5r" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0d4d76ec-5329-4cb9-bf25-ffb312bbf65b", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71", Pod:"coredns-674b8bbfcf-fqp5r", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.47.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali37716b254b0", MAC:"d2:ab:15:ff:f0:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:23.676147 containerd[1551]: 2025-06-20 19:51:23.665 [INFO][4842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqp5r" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--fqp5r-eth0" Jun 20 19:51:23.702743 systemd[1]: Started cri-containerd-50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712.scope - libcontainer container 50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712. Jun 20 19:51:23.759190 containerd[1551]: time="2025-06-20T19:51:23.759109876Z" level=info msg="connecting to shim 45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71" address="unix:///run/containerd/s/bbc153c055dda2ca3a8c2faeee65c0bb86c6ad523160428f5a4d296e736c950d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:23.828584 systemd[1]: Started cri-containerd-45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71.scope - libcontainer container 45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71. 
Jun 20 19:51:23.838265 containerd[1551]: time="2025-06-20T19:51:23.838225851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749bf4dccb-n8f2p,Uid:a4c109fd-2fe5-4963-8e66-cb4e40a83c1d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\"" Jun 20 19:51:23.910732 containerd[1551]: time="2025-06-20T19:51:23.910685710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqp5r,Uid:0d4d76ec-5329-4cb9-bf25-ffb312bbf65b,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71\"" Jun 20 19:51:23.922709 containerd[1551]: time="2025-06-20T19:51:23.921768156Z" level=info msg="CreateContainer within sandbox \"45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:51:23.940307 containerd[1551]: time="2025-06-20T19:51:23.940246441Z" level=info msg="Container 691428ff0d288bfcb6548ae7648341be680fa59e783e9c959c884e2c2a3279a5: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:23.952089 containerd[1551]: time="2025-06-20T19:51:23.952036070Z" level=info msg="CreateContainer within sandbox \"45ee66dfe47cb9a717622117b1f8a1a60dd8a41f6a24b549e160deabffc6fb71\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"691428ff0d288bfcb6548ae7648341be680fa59e783e9c959c884e2c2a3279a5\"" Jun 20 19:51:23.954429 containerd[1551]: time="2025-06-20T19:51:23.954076596Z" level=info msg="StartContainer for \"691428ff0d288bfcb6548ae7648341be680fa59e783e9c959c884e2c2a3279a5\"" Jun 20 19:51:23.964206 containerd[1551]: time="2025-06-20T19:51:23.962954998Z" level=info msg="connecting to shim 691428ff0d288bfcb6548ae7648341be680fa59e783e9c959c884e2c2a3279a5" address="unix:///run/containerd/s/bbc153c055dda2ca3a8c2faeee65c0bb86c6ad523160428f5a4d296e736c950d" protocol=ttrpc version=3 Jun 20 19:51:24.009071 systemd[1]: 
Started cri-containerd-691428ff0d288bfcb6548ae7648341be680fa59e783e9c959c884e2c2a3279a5.scope - libcontainer container 691428ff0d288bfcb6548ae7648341be680fa59e783e9c959c884e2c2a3279a5. Jun 20 19:51:24.103671 containerd[1551]: time="2025-06-20T19:51:24.103457242Z" level=info msg="StartContainer for \"691428ff0d288bfcb6548ae7648341be680fa59e783e9c959c884e2c2a3279a5\" returns successfully" Jun 20 19:51:24.492254 containerd[1551]: time="2025-06-20T19:51:24.491529280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:24.494128 containerd[1551]: time="2025-06-20T19:51:24.494084495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:51:24.498882 containerd[1551]: time="2025-06-20T19:51:24.498778301Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:24.504236 containerd[1551]: time="2025-06-20T19:51:24.503712260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:24.505188 containerd[1551]: time="2025-06-20T19:51:24.504909346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 5.30898009s" Jun 20 19:51:24.505188 containerd[1551]: time="2025-06-20T19:51:24.504974929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference 
\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:51:24.507790 containerd[1551]: time="2025-06-20T19:51:24.507761672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:51:24.522790 containerd[1551]: time="2025-06-20T19:51:24.522748777Z" level=info msg="CreateContainer within sandbox \"1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:51:24.551381 containerd[1551]: time="2025-06-20T19:51:24.551312869Z" level=info msg="Container f0bbebdb6368e887bba0008564de745c114f3da09b4c52f890614802d6ee1dcf: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:24.589192 containerd[1551]: time="2025-06-20T19:51:24.588976410Z" level=info msg="CreateContainer within sandbox \"1f0bea866646969a2820c214bcbfbd70fed3df274956e673c06859bf577c1254\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f0bbebdb6368e887bba0008564de745c114f3da09b4c52f890614802d6ee1dcf\"" Jun 20 19:51:24.590040 containerd[1551]: time="2025-06-20T19:51:24.590012863Z" level=info msg="StartContainer for \"f0bbebdb6368e887bba0008564de745c114f3da09b4c52f890614802d6ee1dcf\"" Jun 20 19:51:24.592817 containerd[1551]: time="2025-06-20T19:51:24.592638201Z" level=info msg="connecting to shim f0bbebdb6368e887bba0008564de745c114f3da09b4c52f890614802d6ee1dcf" address="unix:///run/containerd/s/0f66bf4ba947531db1b8ad836a077c4158d40968459a5067000e7fa28cf01c7e" protocol=ttrpc version=3 Jun 20 19:51:24.650419 systemd[1]: Started cri-containerd-f0bbebdb6368e887bba0008564de745c114f3da09b4c52f890614802d6ee1dcf.scope - libcontainer container f0bbebdb6368e887bba0008564de745c114f3da09b4c52f890614802d6ee1dcf. 
Jun 20 19:51:24.780512 kubelet[2815]: I0620 19:51:24.780291 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fqp5r" podStartSLOduration=60.780238566 podStartE2EDuration="1m0.780238566s" podCreationTimestamp="2025-06-20 19:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:51:24.777798127 +0000 UTC m=+65.792982896" watchObservedRunningTime="2025-06-20 19:51:24.780238566 +0000 UTC m=+65.795423335" Jun 20 19:51:24.811696 containerd[1551]: time="2025-06-20T19:51:24.811630047Z" level=info msg="StartContainer for \"f0bbebdb6368e887bba0008564de745c114f3da09b4c52f890614802d6ee1dcf\" returns successfully" Jun 20 19:51:24.907423 systemd-networkd[1443]: cali37716b254b0: Gained IPv6LL Jun 20 19:51:25.232604 containerd[1551]: time="2025-06-20T19:51:25.232531745Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:25.235913 containerd[1551]: time="2025-06-20T19:51:25.234821100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:51:25.239278 containerd[1551]: time="2025-06-20T19:51:25.239220892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 731.328454ms" Jun 20 19:51:25.239278 containerd[1551]: time="2025-06-20T19:51:25.239271788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:51:25.244487 
containerd[1551]: time="2025-06-20T19:51:25.244317457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:51:25.263486 containerd[1551]: time="2025-06-20T19:51:25.263324778Z" level=info msg="CreateContainer within sandbox \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:51:25.292647 systemd-networkd[1443]: caliedd4e94b734: Gained IPv6LL Jun 20 19:51:25.297992 containerd[1551]: time="2025-06-20T19:51:25.297915518Z" level=info msg="Container 8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:25.322524 containerd[1551]: time="2025-06-20T19:51:25.322439697Z" level=info msg="CreateContainer within sandbox \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\"" Jun 20 19:51:25.324879 containerd[1551]: time="2025-06-20T19:51:25.324770249Z" level=info msg="StartContainer for \"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\"" Jun 20 19:51:25.327698 containerd[1551]: time="2025-06-20T19:51:25.327646880Z" level=info msg="connecting to shim 8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf" address="unix:///run/containerd/s/51c2587872949a9d5e11249b42968b4bbe0182f9ce7c8222114a2c0a2e037142" protocol=ttrpc version=3 Jun 20 19:51:25.371824 systemd[1]: Started cri-containerd-8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf.scope - libcontainer container 8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf. 
Jun 20 19:51:25.466584 containerd[1551]: time="2025-06-20T19:51:25.466531163Z" level=info msg="StartContainer for \"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" returns successfully" Jun 20 19:51:25.790389 kubelet[2815]: I0620 19:51:25.790266 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56bd6d945d-6pq8f" podStartSLOduration=36.492773644 podStartE2EDuration="48.790233987s" podCreationTimestamp="2025-06-20 19:50:37 +0000 UTC" firstStartedPulling="2025-06-20 19:51:12.209074216 +0000 UTC m=+53.224258965" lastFinishedPulling="2025-06-20 19:51:24.506534559 +0000 UTC m=+65.521719308" observedRunningTime="2025-06-20 19:51:25.76974299 +0000 UTC m=+66.784927739" watchObservedRunningTime="2025-06-20 19:51:25.790233987 +0000 UTC m=+66.805418746" Jun 20 19:51:26.148616 containerd[1551]: time="2025-06-20T19:51:26.148554371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v7lms,Uid:5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc,Namespace:kube-system,Attempt:0,}" Jun 20 19:51:26.407687 systemd-networkd[1443]: cali7dd7b054bc4: Link UP Jun 20 19:51:26.409623 systemd-networkd[1443]: cali7dd7b054bc4: Gained carrier Jun 20 19:51:26.437049 kubelet[2815]: I0620 19:51:26.436925 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-749bf4dccb-4wrqp" podStartSLOduration=37.439438366 podStartE2EDuration="50.436643284s" podCreationTimestamp="2025-06-20 19:50:36 +0000 UTC" firstStartedPulling="2025-06-20 19:51:12.245506031 +0000 UTC m=+53.260690790" lastFinishedPulling="2025-06-20 19:51:25.242710909 +0000 UTC m=+66.257895708" observedRunningTime="2025-06-20 19:51:25.852693128 +0000 UTC m=+66.867877887" watchObservedRunningTime="2025-06-20 19:51:26.436643284 +0000 UTC m=+67.451828044" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.252 [INFO][5102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0 coredns-674b8bbfcf- kube-system 5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc 849 0 2025-06-20 19:50:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal coredns-674b8bbfcf-v7lms eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7dd7b054bc4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-v7lms" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.253 [INFO][5102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-v7lms" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.319 [INFO][5114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" HandleID="k8s-pod-network.0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.320 [INFO][5114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" HandleID="k8s-pod-network.0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00022f9f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"coredns-674b8bbfcf-v7lms", "timestamp":"2025-06-20 19:51:26.319716925 +0000 UTC"}, Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.320 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.320 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.320 [INFO][5114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.332 [INFO][5114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.342 [INFO][5114] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.351 [INFO][5114] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.353 [INFO][5114] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.357 [INFO][5114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 
19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.357 [INFO][5114] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.360 [INFO][5114] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.373 [INFO][5114] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.395 [INFO][5114] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.201/26] block=192.168.47.192/26 handle="k8s-pod-network.0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.395 [INFO][5114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.201/26] handle="k8s-pod-network.0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.395 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:51:26.442634 containerd[1551]: 2025-06-20 19:51:26.395 [INFO][5114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.201/26] IPv6=[] ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" HandleID="k8s-pod-network.0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" Jun 20 19:51:26.445476 containerd[1551]: 2025-06-20 19:51:26.399 [INFO][5102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-v7lms" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"coredns-674b8bbfcf-v7lms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali7dd7b054bc4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:26.445476 containerd[1551]: 2025-06-20 19:51:26.400 [INFO][5102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.201/32] ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-v7lms" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" Jun 20 19:51:26.445476 containerd[1551]: 2025-06-20 19:51:26.400 [INFO][5102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dd7b054bc4 ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-v7lms" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" Jun 20 19:51:26.445476 containerd[1551]: 2025-06-20 19:51:26.409 [INFO][5102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-v7lms" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" Jun 20 19:51:26.445476 containerd[1551]: 2025-06-20 19:51:26.411 [INFO][5102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-v7lms" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 50, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c", Pod:"coredns-674b8bbfcf-v7lms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7dd7b054bc4", MAC:"c6:db:d2:32:05:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:26.445476 containerd[1551]: 2025-06-20 19:51:26.438 [INFO][5102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-v7lms" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-coredns--674b8bbfcf--v7lms-eth0" Jun 20 19:51:26.501116 containerd[1551]: time="2025-06-20T19:51:26.500408357Z" level=info msg="connecting to shim 0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c" address="unix:///run/containerd/s/140af9f2703604db3d53ee1fd7de0d451193a57b6ab316c5499cdea33eba0a16" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:26.555293 systemd[1]: Started cri-containerd-0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c.scope - libcontainer container 0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c. Jun 20 19:51:26.749141 kubelet[2815]: I0620 19:51:26.748720 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:51:26.892861 containerd[1551]: time="2025-06-20T19:51:26.892326297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v7lms,Uid:5868c2f2-8e4f-4f1f-9b42-392b9fdd6abc,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c\"" Jun 20 19:51:26.904112 containerd[1551]: time="2025-06-20T19:51:26.903867598Z" level=info msg="CreateContainer within sandbox \"0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:51:26.935207 containerd[1551]: time="2025-06-20T19:51:26.931497519Z" level=info msg="Container b83ca04e8a2019525e178f1206769075480af8a10eeb2077398d88f04d987669: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:26.959501 containerd[1551]: 
time="2025-06-20T19:51:26.959436543Z" level=info msg="CreateContainer within sandbox \"0d1ca9914adf725026805d38c7baf24efac5b8667cd460f324b085d4ca644d8c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b83ca04e8a2019525e178f1206769075480af8a10eeb2077398d88f04d987669\"" Jun 20 19:51:26.961309 containerd[1551]: time="2025-06-20T19:51:26.961274377Z" level=info msg="StartContainer for \"b83ca04e8a2019525e178f1206769075480af8a10eeb2077398d88f04d987669\"" Jun 20 19:51:26.962416 containerd[1551]: time="2025-06-20T19:51:26.962381173Z" level=info msg="connecting to shim b83ca04e8a2019525e178f1206769075480af8a10eeb2077398d88f04d987669" address="unix:///run/containerd/s/140af9f2703604db3d53ee1fd7de0d451193a57b6ab316c5499cdea33eba0a16" protocol=ttrpc version=3 Jun 20 19:51:26.994760 systemd[1]: Started cri-containerd-b83ca04e8a2019525e178f1206769075480af8a10eeb2077398d88f04d987669.scope - libcontainer container b83ca04e8a2019525e178f1206769075480af8a10eeb2077398d88f04d987669. 
Jun 20 19:51:27.083478 containerd[1551]: time="2025-06-20T19:51:27.083429590Z" level=info msg="StartContainer for \"b83ca04e8a2019525e178f1206769075480af8a10eeb2077398d88f04d987669\" returns successfully" Jun 20 19:51:27.659474 systemd-networkd[1443]: cali7dd7b054bc4: Gained IPv6LL Jun 20 19:51:27.763820 kubelet[2815]: I0620 19:51:27.763718 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:51:27.848456 kubelet[2815]: I0620 19:51:27.847026 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v7lms" podStartSLOduration=63.847002257 podStartE2EDuration="1m3.847002257s" podCreationTimestamp="2025-06-20 19:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:51:27.790731701 +0000 UTC m=+68.805916450" watchObservedRunningTime="2025-06-20 19:51:27.847002257 +0000 UTC m=+68.862187016" Jun 20 19:51:28.698850 containerd[1551]: time="2025-06-20T19:51:28.697952542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:28.699570 containerd[1551]: time="2025-06-20T19:51:28.699546024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 20 19:51:28.701083 containerd[1551]: time="2025-06-20T19:51:28.701054928Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:28.708684 containerd[1551]: time="2025-06-20T19:51:28.708642417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:28.710192 containerd[1551]: 
time="2025-06-20T19:51:28.710147905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 3.465738645s" Jun 20 19:51:28.710313 containerd[1551]: time="2025-06-20T19:51:28.710289431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 20 19:51:28.713583 containerd[1551]: time="2025-06-20T19:51:28.713523347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:51:28.719686 containerd[1551]: time="2025-06-20T19:51:28.719607252Z" level=info msg="CreateContainer within sandbox \"d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:51:28.738208 containerd[1551]: time="2025-06-20T19:51:28.737286640Z" level=info msg="Container be20c841544bee44c9fb39bb7ae0458b260ca16492d97dd0ef57f69aa2683d53: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:28.747474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93656752.mount: Deactivated successfully. 
Jun 20 19:51:28.753210 containerd[1551]: time="2025-06-20T19:51:28.753146349Z" level=info msg="CreateContainer within sandbox \"d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"be20c841544bee44c9fb39bb7ae0458b260ca16492d97dd0ef57f69aa2683d53\"" Jun 20 19:51:28.754248 containerd[1551]: time="2025-06-20T19:51:28.754187061Z" level=info msg="StartContainer for \"be20c841544bee44c9fb39bb7ae0458b260ca16492d97dd0ef57f69aa2683d53\"" Jun 20 19:51:28.756498 containerd[1551]: time="2025-06-20T19:51:28.756428965Z" level=info msg="connecting to shim be20c841544bee44c9fb39bb7ae0458b260ca16492d97dd0ef57f69aa2683d53" address="unix:///run/containerd/s/925e1a48b3c41aaac75320c13be5d28b57a549ad3d9ce64efad364e9cedc04cf" protocol=ttrpc version=3 Jun 20 19:51:28.796449 systemd[1]: Started cri-containerd-be20c841544bee44c9fb39bb7ae0458b260ca16492d97dd0ef57f69aa2683d53.scope - libcontainer container be20c841544bee44c9fb39bb7ae0458b260ca16492d97dd0ef57f69aa2683d53. 
Jun 20 19:51:28.915387 containerd[1551]: time="2025-06-20T19:51:28.915311835Z" level=info msg="StartContainer for \"be20c841544bee44c9fb39bb7ae0458b260ca16492d97dd0ef57f69aa2683d53\" returns successfully" Jun 20 19:51:30.897897 containerd[1551]: time="2025-06-20T19:51:30.897817018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:30.899408 containerd[1551]: time="2025-06-20T19:51:30.899351691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:51:30.900978 containerd[1551]: time="2025-06-20T19:51:30.900919275Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:30.904013 containerd[1551]: time="2025-06-20T19:51:30.903919549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:30.904743 containerd[1551]: time="2025-06-20T19:51:30.904577679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 2.191010561s" Jun 20 19:51:30.904743 containerd[1551]: time="2025-06-20T19:51:30.904613537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:51:30.907098 containerd[1551]: time="2025-06-20T19:51:30.907054978Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:51:30.917576 containerd[1551]: time="2025-06-20T19:51:30.917405274Z" level=info msg="CreateContainer within sandbox \"c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:51:30.932008 containerd[1551]: time="2025-06-20T19:51:30.931950115Z" level=info msg="Container 2f9b20a0b3f422798d2f97521f8e9ada519fbc16403fc470ceaaa7436d3c1c15: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:30.950590 containerd[1551]: time="2025-06-20T19:51:30.950518578Z" level=info msg="CreateContainer within sandbox \"c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2f9b20a0b3f422798d2f97521f8e9ada519fbc16403fc470ceaaa7436d3c1c15\"" Jun 20 19:51:30.952860 containerd[1551]: time="2025-06-20T19:51:30.952711702Z" level=info msg="StartContainer for \"2f9b20a0b3f422798d2f97521f8e9ada519fbc16403fc470ceaaa7436d3c1c15\"" Jun 20 19:51:30.956673 containerd[1551]: time="2025-06-20T19:51:30.956566436Z" level=info msg="connecting to shim 2f9b20a0b3f422798d2f97521f8e9ada519fbc16403fc470ceaaa7436d3c1c15" address="unix:///run/containerd/s/48568cbf66f6fb64e95fee85e80a1e1b21c82c29037b622e85847ca98e1001e1" protocol=ttrpc version=3 Jun 20 19:51:30.986551 systemd[1]: Started cri-containerd-2f9b20a0b3f422798d2f97521f8e9ada519fbc16403fc470ceaaa7436d3c1c15.scope - libcontainer container 2f9b20a0b3f422798d2f97521f8e9ada519fbc16403fc470ceaaa7436d3c1c15. Jun 20 19:51:31.109981 containerd[1551]: time="2025-06-20T19:51:31.109891639Z" level=info msg="StartContainer for \"2f9b20a0b3f422798d2f97521f8e9ada519fbc16403fc470ceaaa7436d3c1c15\" returns successfully" Jun 20 19:51:35.749519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439735006.mount: Deactivated successfully. 
Jun 20 19:51:36.727119 containerd[1551]: time="2025-06-20T19:51:36.727048492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:36.728607 containerd[1551]: time="2025-06-20T19:51:36.728554340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 20 19:51:36.730042 containerd[1551]: time="2025-06-20T19:51:36.729984214Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:36.735257 containerd[1551]: time="2025-06-20T19:51:36.735164026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:36.736442 containerd[1551]: time="2025-06-20T19:51:36.736301499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 5.829215943s" Jun 20 19:51:36.736442 containerd[1551]: time="2025-06-20T19:51:36.736344781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 20 19:51:36.737987 containerd[1551]: time="2025-06-20T19:51:36.737917104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:51:36.746420 containerd[1551]: time="2025-06-20T19:51:36.746387958Z" level=info msg="CreateContainer within sandbox 
\"38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:51:36.759325 containerd[1551]: time="2025-06-20T19:51:36.759243575Z" level=info msg="Container f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:36.777836 containerd[1551]: time="2025-06-20T19:51:36.777776291Z" level=info msg="CreateContainer within sandbox \"38c823638633ecdae6a9673dfa7b292c8035fb6510cf8dd20edc1b83ae792a38\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\"" Jun 20 19:51:36.780083 containerd[1551]: time="2025-06-20T19:51:36.780049795Z" level=info msg="StartContainer for \"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\"" Jun 20 19:51:36.781913 containerd[1551]: time="2025-06-20T19:51:36.781885544Z" level=info msg="connecting to shim f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26" address="unix:///run/containerd/s/839054d7af41fe23a0a59bf981c20cbdaa422caa5ef55c137ef8110b54648a04" protocol=ttrpc version=3 Jun 20 19:51:36.817334 systemd[1]: Started cri-containerd-f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26.scope - libcontainer container f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26. 
Jun 20 19:51:36.896849 containerd[1551]: time="2025-06-20T19:51:36.896699998Z" level=info msg="StartContainer for \"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" returns successfully" Jun 20 19:51:37.211353 containerd[1551]: time="2025-06-20T19:51:37.211268724Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:37.213214 containerd[1551]: time="2025-06-20T19:51:37.212588662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:51:37.215941 containerd[1551]: time="2025-06-20T19:51:37.215898690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 477.939166ms" Jun 20 19:51:37.216356 containerd[1551]: time="2025-06-20T19:51:37.216110869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:51:37.218414 containerd[1551]: time="2025-06-20T19:51:37.218382220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 19:51:37.228992 containerd[1551]: time="2025-06-20T19:51:37.228734760Z" level=info msg="CreateContainer within sandbox \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:51:37.245960 containerd[1551]: time="2025-06-20T19:51:37.245911350Z" level=info msg="Container 755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:37.268361 
containerd[1551]: time="2025-06-20T19:51:37.268285154Z" level=info msg="CreateContainer within sandbox \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\"" Jun 20 19:51:37.271231 containerd[1551]: time="2025-06-20T19:51:37.270494749Z" level=info msg="StartContainer for \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\"" Jun 20 19:51:37.272073 containerd[1551]: time="2025-06-20T19:51:37.272050039Z" level=info msg="connecting to shim 755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1" address="unix:///run/containerd/s/618f957ca7050924635a355c417c06939b463194c9308b0f1921ce66d81dd259" protocol=ttrpc version=3 Jun 20 19:51:37.303371 systemd[1]: Started cri-containerd-755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1.scope - libcontainer container 755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1. 
Jun 20 19:51:37.377945 containerd[1551]: time="2025-06-20T19:51:37.377908132Z" level=info msg="StartContainer for \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" returns successfully" Jun 20 19:51:37.878201 kubelet[2815]: I0620 19:51:37.877506 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-749bf4dccb-n8f2p" podStartSLOduration=48.500224842 podStartE2EDuration="1m1.877484573s" podCreationTimestamp="2025-06-20 19:50:36 +0000 UTC" firstStartedPulling="2025-06-20 19:51:23.840814909 +0000 UTC m=+64.855999659" lastFinishedPulling="2025-06-20 19:51:37.21807464 +0000 UTC m=+78.233259390" observedRunningTime="2025-06-20 19:51:37.870862915 +0000 UTC m=+78.886047704" watchObservedRunningTime="2025-06-20 19:51:37.877484573 +0000 UTC m=+78.892669323" Jun 20 19:51:37.935630 kubelet[2815]: I0620 19:51:37.935541 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-9bxj5" podStartSLOduration=42.925967946 podStartE2EDuration="57.935522708s" podCreationTimestamp="2025-06-20 19:50:40 +0000 UTC" firstStartedPulling="2025-06-20 19:51:21.728149762 +0000 UTC m=+62.743334511" lastFinishedPulling="2025-06-20 19:51:36.737704504 +0000 UTC m=+77.752889273" observedRunningTime="2025-06-20 19:51:37.93486571 +0000 UTC m=+78.950050459" watchObservedRunningTime="2025-06-20 19:51:37.935522708 +0000 UTC m=+78.950707477" Jun 20 19:51:38.246444 containerd[1551]: time="2025-06-20T19:51:38.245771132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"7cbb872dcb5295e2c944360d5a995113a53986e680fb5d7136ebbc3dc63dc342\" pid:5394 exit_status:1 exited_at:{seconds:1750449098 nanos:245140143}" Jun 20 19:51:38.847208 kubelet[2815]: I0620 19:51:38.845679 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:51:38.980600 update_engine[1538]: 
I20250620 19:51:38.980252 1538 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 20 19:51:38.980600 update_engine[1538]: I20250620 19:51:38.980602 1538 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 20 19:51:38.982963 update_engine[1538]: I20250620 19:51:38.982903 1538 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 20 19:51:38.989290 update_engine[1538]: I20250620 19:51:38.987680 1538 omaha_request_params.cc:62] Current group set to beta Jun 20 19:51:39.001214 update_engine[1538]: I20250620 19:51:39.000860 1538 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 20 19:51:39.001214 update_engine[1538]: I20250620 19:51:39.000906 1538 update_attempter.cc:643] Scheduling an action processor start. Jun 20 19:51:39.001214 update_engine[1538]: I20250620 19:51:39.000942 1538 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 19:51:39.025218 update_engine[1538]: I20250620 19:51:39.025121 1538 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 20 19:51:39.025413 update_engine[1538]: I20250620 19:51:39.025307 1538 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 19:51:39.025413 update_engine[1538]: I20250620 19:51:39.025322 1538 omaha_request_action.cc:272] Request: Jun 20 19:51:39.025413 update_engine[1538]: Jun 20 19:51:39.025413 update_engine[1538]: Jun 20 19:51:39.025413 update_engine[1538]: Jun 20 19:51:39.025413 update_engine[1538]: Jun 20 19:51:39.025413 update_engine[1538]: Jun 20 19:51:39.025413 update_engine[1538]: Jun 20 19:51:39.025413 update_engine[1538]: Jun 20 19:51:39.025413 update_engine[1538]: Jun 20 19:51:39.025413 update_engine[1538]: I20250620 19:51:39.025332 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:51:39.028692 locksmithd[1590]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 20 19:51:39.031261 containerd[1551]: time="2025-06-20T19:51:39.031215829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"27cc569d548c85229f0f17ef94a0463e1727a8dd424bf9bcab86b7bf8fae0400\" pid:5422 exit_status:1 exited_at:{seconds:1750449099 nanos:30721617}" Jun 20 19:51:39.043342 update_engine[1538]: I20250620 19:51:39.043279 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:51:39.043924 update_engine[1538]: I20250620 19:51:39.043871 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 19:51:39.051914 update_engine[1538]: E20250620 19:51:39.051868 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:51:39.052109 update_engine[1538]: I20250620 19:51:39.052050 1538 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 20 19:51:39.984874 containerd[1551]: time="2025-06-20T19:51:39.984817717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"b77aae2ccb5295278f822d772acb6b47a69da704cb16bdd59a6ce47c78ac84ac\" pid:5447 exit_status:1 exited_at:{seconds:1750449099 nanos:984101488}" Jun 20 19:51:40.958687 containerd[1551]: time="2025-06-20T19:51:40.958626453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:40.959876 containerd[1551]: time="2025-06-20T19:51:40.959800887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633" Jun 20 19:51:40.961104 containerd[1551]: time="2025-06-20T19:51:40.961067734Z" level=info msg="ImageCreate event 
name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:40.964571 containerd[1551]: time="2025-06-20T19:51:40.964524047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:40.965299 containerd[1551]: time="2025-06-20T19:51:40.965265314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 3.746845183s" Jun 20 19:51:40.965397 containerd[1551]: time="2025-06-20T19:51:40.965301582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\"" Jun 20 19:51:40.967659 containerd[1551]: time="2025-06-20T19:51:40.967621474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 19:51:40.975422 containerd[1551]: time="2025-06-20T19:51:40.975362963Z" level=info msg="CreateContainer within sandbox \"d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 20 19:51:40.999198 containerd[1551]: time="2025-06-20T19:51:40.998351095Z" level=info msg="Container 0634355c8a13017233da1f6234f044ee0860101053c5d083290222e1214ad64d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:41.022086 containerd[1551]: time="2025-06-20T19:51:41.021960077Z" level=info msg="CreateContainer within sandbox 
\"d358be86acbb9a814aca9ac8c9343cfb66491aaf6ec6d41371c55a1e23caf86d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0634355c8a13017233da1f6234f044ee0860101053c5d083290222e1214ad64d\"" Jun 20 19:51:41.024721 containerd[1551]: time="2025-06-20T19:51:41.024673700Z" level=info msg="StartContainer for \"0634355c8a13017233da1f6234f044ee0860101053c5d083290222e1214ad64d\"" Jun 20 19:51:41.027017 containerd[1551]: time="2025-06-20T19:51:41.026936986Z" level=info msg="connecting to shim 0634355c8a13017233da1f6234f044ee0860101053c5d083290222e1214ad64d" address="unix:///run/containerd/s/925e1a48b3c41aaac75320c13be5d28b57a549ad3d9ce64efad364e9cedc04cf" protocol=ttrpc version=3 Jun 20 19:51:41.086424 systemd[1]: Started cri-containerd-0634355c8a13017233da1f6234f044ee0860101053c5d083290222e1214ad64d.scope - libcontainer container 0634355c8a13017233da1f6234f044ee0860101053c5d083290222e1214ad64d. Jun 20 19:51:41.203335 containerd[1551]: time="2025-06-20T19:51:41.203157557Z" level=info msg="StartContainer for \"0634355c8a13017233da1f6234f044ee0860101053c5d083290222e1214ad64d\" returns successfully" Jun 20 19:51:41.305042 kubelet[2815]: I0620 19:51:41.304991 2815 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 20 19:51:41.309086 kubelet[2815]: I0620 19:51:41.308910 2815 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 20 19:51:41.884937 kubelet[2815]: I0620 19:51:41.884792 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5ldnw" podStartSLOduration=33.30506545 podStartE2EDuration="1m1.884769598s" podCreationTimestamp="2025-06-20 19:50:40 +0000 UTC" firstStartedPulling="2025-06-20 19:51:12.38668286 +0000 UTC m=+53.401867609" lastFinishedPulling="2025-06-20 
19:51:40.966387008 +0000 UTC m=+81.981571757" observedRunningTime="2025-06-20 19:51:41.88184763 +0000 UTC m=+82.897032379" watchObservedRunningTime="2025-06-20 19:51:41.884769598 +0000 UTC m=+82.899954357" Jun 20 19:51:42.929838 containerd[1551]: time="2025-06-20T19:51:42.929718628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"d28708c17de2c5ff67450ec7bfdfed0895700c196e8e3e94af98afe59a879f42\" pid:5510 exited_at:{seconds:1750449102 nanos:928964969}" Jun 20 19:51:44.532871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738530747.mount: Deactivated successfully. Jun 20 19:51:44.612111 containerd[1551]: time="2025-06-20T19:51:44.612032574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:44.613761 containerd[1551]: time="2025-06-20T19:51:44.613727779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 20 19:51:44.615137 containerd[1551]: time="2025-06-20T19:51:44.614747791Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:44.619002 containerd[1551]: time="2025-06-20T19:51:44.618944019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:51:44.622196 containerd[1551]: time="2025-06-20T19:51:44.622005848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 3.65434522s" Jun 20 19:51:44.622938 containerd[1551]: time="2025-06-20T19:51:44.622298801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 20 19:51:44.635782 containerd[1551]: time="2025-06-20T19:51:44.635633419Z" level=info msg="CreateContainer within sandbox \"c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 19:51:44.654341 containerd[1551]: time="2025-06-20T19:51:44.653381637Z" level=info msg="Container 94197a17d08b527f8121d4460791ff7be4e5d50461006097f05e2a3fc5185ed0: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:44.695866 containerd[1551]: time="2025-06-20T19:51:44.695766200Z" level=info msg="CreateContainer within sandbox \"c2a42ca9730f4493c0a08af623eb3d9b75651ec2abe52730470fc80443469014\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"94197a17d08b527f8121d4460791ff7be4e5d50461006097f05e2a3fc5185ed0\"" Jun 20 19:51:44.703853 containerd[1551]: time="2025-06-20T19:51:44.703792085Z" level=info msg="StartContainer for \"94197a17d08b527f8121d4460791ff7be4e5d50461006097f05e2a3fc5185ed0\"" Jun 20 19:51:44.707233 containerd[1551]: time="2025-06-20T19:51:44.706860437Z" level=info msg="connecting to shim 94197a17d08b527f8121d4460791ff7be4e5d50461006097f05e2a3fc5185ed0" address="unix:///run/containerd/s/48568cbf66f6fb64e95fee85e80a1e1b21c82c29037b622e85847ca98e1001e1" protocol=ttrpc version=3 Jun 20 19:51:44.750500 systemd[1]: Started cri-containerd-94197a17d08b527f8121d4460791ff7be4e5d50461006097f05e2a3fc5185ed0.scope - libcontainer container 94197a17d08b527f8121d4460791ff7be4e5d50461006097f05e2a3fc5185ed0. 
Jun 20 19:51:45.189778 kubelet[2815]: I0620 19:51:45.189103 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:51:45.405952 containerd[1551]: time="2025-06-20T19:51:45.405660418Z" level=info msg="StartContainer for \"94197a17d08b527f8121d4460791ff7be4e5d50461006097f05e2a3fc5185ed0\" returns successfully" Jun 20 19:51:45.713698 kubelet[2815]: I0620 19:51:45.713394 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:51:45.716399 containerd[1551]: time="2025-06-20T19:51:45.715860390Z" level=info msg="StopContainer for \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" with timeout 30 (s)" Jun 20 19:51:45.718580 containerd[1551]: time="2025-06-20T19:51:45.718493603Z" level=info msg="Stop container \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" with signal terminated" Jun 20 19:51:45.788498 systemd[1]: cri-containerd-755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1.scope: Deactivated successfully. Jun 20 19:51:45.788897 systemd[1]: cri-containerd-755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1.scope: Consumed 1.483s CPU time, 45.8M memory peak. 
Jun 20 19:51:45.797791 containerd[1551]: time="2025-06-20T19:51:45.797710216Z" level=info msg="received exit event container_id:\"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" id:\"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" pid:5360 exit_status:1 exited_at:{seconds:1750449105 nanos:796273288}" Jun 20 19:51:45.798189 containerd[1551]: time="2025-06-20T19:51:45.797732989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" id:\"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" pid:5360 exit_status:1 exited_at:{seconds:1750449105 nanos:796273288}" Jun 20 19:51:45.844329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1-rootfs.mount: Deactivated successfully. Jun 20 19:51:45.976242 systemd[1]: Created slice kubepods-besteffort-poda9aeed66_1971_4d59_af93_b94432e81db7.slice - libcontainer container kubepods-besteffort-poda9aeed66_1971_4d59_af93_b94432e81db7.slice. 
Jun 20 19:51:46.008934 kubelet[2815]: I0620 19:51:46.008840 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8675f76d8f-fbgz6" podStartSLOduration=3.248177351 podStartE2EDuration="35.008795658s" podCreationTimestamp="2025-06-20 19:51:11 +0000 UTC" firstStartedPulling="2025-06-20 19:51:12.864315019 +0000 UTC m=+53.879499768" lastFinishedPulling="2025-06-20 19:51:44.624933315 +0000 UTC m=+85.640118075" observedRunningTime="2025-06-20 19:51:46.008315353 +0000 UTC m=+87.023500112" watchObservedRunningTime="2025-06-20 19:51:46.008795658 +0000 UTC m=+87.023980427" Jun 20 19:51:46.011404 containerd[1551]: time="2025-06-20T19:51:46.011304295Z" level=info msg="StopContainer for \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" returns successfully" Jun 20 19:51:46.012464 containerd[1551]: time="2025-06-20T19:51:46.012340378Z" level=info msg="StopPodSandbox for \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\"" Jun 20 19:51:46.013161 containerd[1551]: time="2025-06-20T19:51:46.013136888Z" level=info msg="Container to stop \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:51:46.033159 systemd[1]: cri-containerd-50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712.scope: Deactivated successfully. Jun 20 19:51:46.035275 containerd[1551]: time="2025-06-20T19:51:46.034954564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" id:\"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" pid:4921 exit_status:137 exited_at:{seconds:1750449106 nanos:33498029}" Jun 20 19:51:46.098131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712-rootfs.mount: Deactivated successfully. 
Jun 20 19:51:46.098461 containerd[1551]: time="2025-06-20T19:51:46.098398644Z" level=info msg="shim disconnected" id=50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712 namespace=k8s.io Jun 20 19:51:46.098461 containerd[1551]: time="2025-06-20T19:51:46.098443048Z" level=warning msg="cleaning up after shim disconnected" id=50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712 namespace=k8s.io Jun 20 19:51:46.098639 containerd[1551]: time="2025-06-20T19:51:46.098453347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:51:46.132576 containerd[1551]: time="2025-06-20T19:51:46.132497654Z" level=info msg="received exit event sandbox_id:\"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" exit_status:137 exited_at:{seconds:1750449106 nanos:33498029}" Jun 20 19:51:46.137244 kubelet[2815]: I0620 19:51:46.135945 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a9aeed66-1971-4d59-af93-b94432e81db7-calico-apiserver-certs\") pod \"calico-apiserver-56bd6d945d-jt7z9\" (UID: \"a9aeed66-1971-4d59-af93-b94432e81db7\") " pod="calico-apiserver/calico-apiserver-56bd6d945d-jt7z9" Jun 20 19:51:46.137550 kubelet[2815]: I0620 19:51:46.137496 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvl2\" (UniqueName: \"kubernetes.io/projected/a9aeed66-1971-4d59-af93-b94432e81db7-kube-api-access-chvl2\") pod \"calico-apiserver-56bd6d945d-jt7z9\" (UID: \"a9aeed66-1971-4d59-af93-b94432e81db7\") " pod="calico-apiserver/calico-apiserver-56bd6d945d-jt7z9" Jun 20 19:51:46.138792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712-shm.mount: Deactivated successfully. 
Jun 20 19:51:46.218789 systemd-networkd[1443]: caliedd4e94b734: Link DOWN Jun 20 19:51:46.219099 systemd-networkd[1443]: caliedd4e94b734: Lost carrier Jun 20 19:51:46.284408 containerd[1551]: time="2025-06-20T19:51:46.283691528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bd6d945d-jt7z9,Uid:a9aeed66-1971-4d59-af93-b94432e81db7,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.214 [INFO][5635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.214 [INFO][5635] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" iface="eth0" netns="/var/run/netns/cni-53e56eee-3c4a-9b5e-f48b-571e04b9fdf2" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.215 [INFO][5635] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" iface="eth0" netns="/var/run/netns/cni-53e56eee-3c4a-9b5e-f48b-571e04b9fdf2" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.225 [INFO][5635] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" after=10.736201ms iface="eth0" netns="/var/run/netns/cni-53e56eee-3c4a-9b5e-f48b-571e04b9fdf2" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.225 [INFO][5635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.225 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.329 [INFO][5642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.330 [INFO][5642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.330 [INFO][5642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.414 [INFO][5642] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.415 [INFO][5642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.417 [INFO][5642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:51:46.424363 containerd[1551]: 2025-06-20 19:51:46.420 [INFO][5635] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:51:46.426381 containerd[1551]: time="2025-06-20T19:51:46.424672717Z" level=info msg="TearDown network for sandbox \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" successfully" Jun 20 19:51:46.426381 containerd[1551]: time="2025-06-20T19:51:46.424706671Z" level=info msg="StopPodSandbox for \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" returns successfully" Jun 20 19:51:46.541638 kubelet[2815]: I0620 19:51:46.540979 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zh6\" (UniqueName: \"kubernetes.io/projected/a4c109fd-2fe5-4963-8e66-cb4e40a83c1d-kube-api-access-x7zh6\") pod \"a4c109fd-2fe5-4963-8e66-cb4e40a83c1d\" (UID: \"a4c109fd-2fe5-4963-8e66-cb4e40a83c1d\") " Jun 20 19:51:46.544124 kubelet[2815]: I0620 19:51:46.541102 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4c109fd-2fe5-4963-8e66-cb4e40a83c1d-calico-apiserver-certs\") pod \"a4c109fd-2fe5-4963-8e66-cb4e40a83c1d\" (UID: \"a4c109fd-2fe5-4963-8e66-cb4e40a83c1d\") " Jun 20 19:51:46.549163 kubelet[2815]: I0620 19:51:46.549061 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c109fd-2fe5-4963-8e66-cb4e40a83c1d-kube-api-access-x7zh6" (OuterVolumeSpecName: "kube-api-access-x7zh6") pod "a4c109fd-2fe5-4963-8e66-cb4e40a83c1d" (UID: "a4c109fd-2fe5-4963-8e66-cb4e40a83c1d"). InnerVolumeSpecName "kube-api-access-x7zh6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:51:46.553589 kubelet[2815]: I0620 19:51:46.553477 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c109fd-2fe5-4963-8e66-cb4e40a83c1d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "a4c109fd-2fe5-4963-8e66-cb4e40a83c1d" (UID: "a4c109fd-2fe5-4963-8e66-cb4e40a83c1d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:51:46.572241 systemd-networkd[1443]: cali72f9000033d: Link UP Jun 20 19:51:46.575861 systemd-networkd[1443]: cali72f9000033d: Gained carrier Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.350 [INFO][5655] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0 calico-apiserver-56bd6d945d- calico-apiserver a9aeed66-1971-4d59-af93-b94432e81db7 1174 0 2025-06-20 19:51:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56bd6d945d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-0-0-4524070979.novalocal calico-apiserver-56bd6d945d-jt7z9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72f9000033d [] [] }} ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-jt7z9" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.350 [INFO][5655] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Namespace="calico-apiserver" 
Pod="calico-apiserver-56bd6d945d-jt7z9" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.395 [INFO][5669] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" HandleID="k8s-pod-network.a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.395 [INFO][5669] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" HandleID="k8s-pod-network.a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-0-0-4524070979.novalocal", "pod":"calico-apiserver-56bd6d945d-jt7z9", "timestamp":"2025-06-20 19:51:46.395313569 +0000 UTC"}, Hostname:"ci-4344-1-0-0-4524070979.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.395 [INFO][5669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.417 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.417 [INFO][5669] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-0-4524070979.novalocal' Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.440 [INFO][5669] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.451 [INFO][5669] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.521 [INFO][5669] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.524 [INFO][5669] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.529 [INFO][5669] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.529 [INFO][5669] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.532 [INFO][5669] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2 Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.547 [INFO][5669] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:46.597211 
containerd[1551]: 2025-06-20 19:51:46.558 [INFO][5669] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.202/26] block=192.168.47.192/26 handle="k8s-pod-network.a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.559 [INFO][5669] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.202/26] handle="k8s-pod-network.a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" host="ci-4344-1-0-0-4524070979.novalocal" Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.559 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:51:46.597211 containerd[1551]: 2025-06-20 19:51:46.559 [INFO][5669] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.202/26] IPv6=[] ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" HandleID="k8s-pod-network.a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" Jun 20 19:51:46.597995 containerd[1551]: 2025-06-20 19:51:46.563 [INFO][5655] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-jt7z9" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0", GenerateName:"calico-apiserver-56bd6d945d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9aeed66-1971-4d59-af93-b94432e81db7", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 51, 45, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bd6d945d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"", Pod:"calico-apiserver-56bd6d945d-jt7z9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f9000033d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:46.597995 containerd[1551]: 2025-06-20 19:51:46.563 [INFO][5655] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.202/32] ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-jt7z9" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" Jun 20 19:51:46.597995 containerd[1551]: 2025-06-20 19:51:46.564 [INFO][5655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72f9000033d ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-jt7z9" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" Jun 20 19:51:46.597995 containerd[1551]: 2025-06-20 19:51:46.577 [INFO][5655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-jt7z9" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" Jun 20 19:51:46.597995 containerd[1551]: 2025-06-20 19:51:46.577 [INFO][5655] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-jt7z9" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0", GenerateName:"calico-apiserver-56bd6d945d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9aeed66-1971-4d59-af93-b94432e81db7", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bd6d945d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-0-4524070979.novalocal", ContainerID:"a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2", Pod:"calico-apiserver-56bd6d945d-jt7z9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.202/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f9000033d", MAC:"9e:0d:7b:2f:fe:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:51:46.597995 containerd[1551]: 2025-06-20 19:51:46.594 [INFO][5655] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" Namespace="calico-apiserver" Pod="calico-apiserver-56bd6d945d-jt7z9" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--56bd6d945d--jt7z9-eth0" Jun 20 19:51:46.642094 containerd[1551]: time="2025-06-20T19:51:46.641494447Z" level=info msg="connecting to shim a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2" address="unix:///run/containerd/s/54394fb9514e8d5ddaeeda011d55b35a3587d97fbeb7939189125bc43b6d2c38" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:51:46.642381 kubelet[2815]: I0620 19:51:46.642312 2815 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4c109fd-2fe5-4963-8e66-cb4e40a83c1d-calico-apiserver-certs\") on node \"ci-4344-1-0-0-4524070979.novalocal\" DevicePath \"\"" Jun 20 19:51:46.642381 kubelet[2815]: I0620 19:51:46.642379 2815 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x7zh6\" (UniqueName: \"kubernetes.io/projected/a4c109fd-2fe5-4963-8e66-cb4e40a83c1d-kube-api-access-x7zh6\") on node \"ci-4344-1-0-0-4524070979.novalocal\" DevicePath \"\"" Jun 20 19:51:46.668382 systemd[1]: Started cri-containerd-a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2.scope - libcontainer container a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2. 
Jun 20 19:51:46.723930 containerd[1551]: time="2025-06-20T19:51:46.723807025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bd6d945d-jt7z9,Uid:a9aeed66-1971-4d59-af93-b94432e81db7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2\"" Jun 20 19:51:46.731803 containerd[1551]: time="2025-06-20T19:51:46.731761526Z" level=info msg="CreateContainer within sandbox \"a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:51:46.747186 containerd[1551]: time="2025-06-20T19:51:46.747040919Z" level=info msg="Container f178efcf894c3b6d6d4b2f157b17b60b2490db906baf5ff18af1191ddacd4558: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:51:46.758765 containerd[1551]: time="2025-06-20T19:51:46.758696033Z" level=info msg="CreateContainer within sandbox \"a7166000519e7ba65592468339c120eaf4a8d2fa759c2927a916f9630b53c9a2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f178efcf894c3b6d6d4b2f157b17b60b2490db906baf5ff18af1191ddacd4558\"" Jun 20 19:51:46.759635 containerd[1551]: time="2025-06-20T19:51:46.759607812Z" level=info msg="StartContainer for \"f178efcf894c3b6d6d4b2f157b17b60b2490db906baf5ff18af1191ddacd4558\"" Jun 20 19:51:46.760997 containerd[1551]: time="2025-06-20T19:51:46.760964879Z" level=info msg="connecting to shim f178efcf894c3b6d6d4b2f157b17b60b2490db906baf5ff18af1191ddacd4558" address="unix:///run/containerd/s/54394fb9514e8d5ddaeeda011d55b35a3587d97fbeb7939189125bc43b6d2c38" protocol=ttrpc version=3 Jun 20 19:51:46.784353 systemd[1]: Started cri-containerd-f178efcf894c3b6d6d4b2f157b17b60b2490db906baf5ff18af1191ddacd4558.scope - libcontainer container f178efcf894c3b6d6d4b2f157b17b60b2490db906baf5ff18af1191ddacd4558. 
Jun 20 19:51:46.852487 systemd[1]: run-netns-cni\x2d53e56eee\x2d3c4a\x2d9b5e\x2df48b\x2d571e04b9fdf2.mount: Deactivated successfully. Jun 20 19:51:46.852609 systemd[1]: var-lib-kubelet-pods-a4c109fd\x2d2fe5\x2d4963\x2d8e66\x2dcb4e40a83c1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx7zh6.mount: Deactivated successfully. Jun 20 19:51:46.852708 systemd[1]: var-lib-kubelet-pods-a4c109fd\x2d2fe5\x2d4963\x2d8e66\x2dcb4e40a83c1d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jun 20 19:51:46.863316 containerd[1551]: time="2025-06-20T19:51:46.863252820Z" level=info msg="StartContainer for \"f178efcf894c3b6d6d4b2f157b17b60b2490db906baf5ff18af1191ddacd4558\" returns successfully" Jun 20 19:51:46.889641 kubelet[2815]: I0620 19:51:46.889597 2815 scope.go:117] "RemoveContainer" containerID="755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1" Jun 20 19:51:46.895262 containerd[1551]: time="2025-06-20T19:51:46.895219010Z" level=info msg="RemoveContainer for \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\"" Jun 20 19:51:46.922695 systemd[1]: Removed slice kubepods-besteffort-poda4c109fd_2fe5_4963_8e66_cb4e40a83c1d.slice - libcontainer container kubepods-besteffort-poda4c109fd_2fe5_4963_8e66_cb4e40a83c1d.slice. Jun 20 19:51:46.922816 systemd[1]: kubepods-besteffort-poda4c109fd_2fe5_4963_8e66_cb4e40a83c1d.slice: Consumed 1.518s CPU time, 46M memory peak. 
Jun 20 19:51:46.929433 containerd[1551]: time="2025-06-20T19:51:46.929250112Z" level=info msg="RemoveContainer for \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" returns successfully" Jun 20 19:51:46.930480 kubelet[2815]: I0620 19:51:46.930443 2815 scope.go:117] "RemoveContainer" containerID="755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1" Jun 20 19:51:46.930852 containerd[1551]: time="2025-06-20T19:51:46.930796296Z" level=error msg="ContainerStatus for \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\": not found" Jun 20 19:51:46.931035 kubelet[2815]: E0620 19:51:46.931001 2815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\": not found" containerID="755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1" Jun 20 19:51:46.931135 kubelet[2815]: I0620 19:51:46.931055 2815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1"} err="failed to get container status \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"755f4ce2e485856267c9eb4c705193cf9dcd306990f9e97b4a0f1e2d9ade48f1\": not found" Jun 20 19:51:46.971745 kubelet[2815]: I0620 19:51:46.971681 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56bd6d945d-jt7z9" podStartSLOduration=1.9716620470000001 podStartE2EDuration="1.971662047s" podCreationTimestamp="2025-06-20 19:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:51:46.938614839 +0000 UTC m=+87.953799619" watchObservedRunningTime="2025-06-20 19:51:46.971662047 +0000 UTC m=+87.986846816" Jun 20 19:51:47.155571 kubelet[2815]: I0620 19:51:47.154416 2815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c109fd-2fe5-4963-8e66-cb4e40a83c1d" path="/var/lib/kubelet/pods/a4c109fd-2fe5-4963-8e66-cb4e40a83c1d/volumes" Jun 20 19:51:47.825430 containerd[1551]: time="2025-06-20T19:51:47.825142437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"cdb5fda8bb0e4ddd4902411ddb50e54a36b6bf33358b488a5a8d1a801ba0e780\" pid:5786 exited_at:{seconds:1750449107 nanos:824815030}" Jun 20 19:51:48.395438 systemd-networkd[1443]: cali72f9000033d: Gained IPv6LL Jun 20 19:51:48.977198 update_engine[1538]: I20250620 19:51:48.976761 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:51:48.977198 update_engine[1538]: I20250620 19:51:48.977201 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:51:48.977874 update_engine[1538]: I20250620 19:51:48.977524 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 19:51:48.982857 update_engine[1538]: E20250620 19:51:48.982780 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:51:48.982961 update_engine[1538]: I20250620 19:51:48.982894 1538 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 19:51:49.627163 containerd[1551]: time="2025-06-20T19:51:49.626934415Z" level=info msg="StopContainer for \"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" with timeout 30 (s)" Jun 20 19:51:49.630403 containerd[1551]: time="2025-06-20T19:51:49.630365962Z" level=info msg="Stop container \"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" with signal terminated" Jun 20 19:51:49.683745 systemd[1]: cri-containerd-8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf.scope: Deactivated successfully. Jun 20 19:51:49.684122 systemd[1]: cri-containerd-8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf.scope: Consumed 1.296s CPU time, 57.5M memory peak. Jun 20 19:51:49.690942 containerd[1551]: time="2025-06-20T19:51:49.690281792Z" level=info msg="received exit event container_id:\"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" id:\"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" pid:5075 exit_status:1 exited_at:{seconds:1750449109 nanos:689411492}" Jun 20 19:51:49.691252 containerd[1551]: time="2025-06-20T19:51:49.691225320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" id:\"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" pid:5075 exit_status:1 exited_at:{seconds:1750449109 nanos:689411492}" Jun 20 19:51:49.768050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf-rootfs.mount: Deactivated successfully. 
Jun 20 19:51:49.802291 containerd[1551]: time="2025-06-20T19:51:49.802209628Z" level=info msg="StopContainer for \"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" returns successfully" Jun 20 19:51:49.804229 containerd[1551]: time="2025-06-20T19:51:49.803917407Z" level=info msg="StopPodSandbox for \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\"" Jun 20 19:51:49.804672 containerd[1551]: time="2025-06-20T19:51:49.804542384Z" level=info msg="Container to stop \"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:51:49.828874 systemd[1]: cri-containerd-5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd.scope: Deactivated successfully. Jun 20 19:51:49.839106 containerd[1551]: time="2025-06-20T19:51:49.839037821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" id:\"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" pid:4309 exit_status:137 exited_at:{seconds:1750449109 nanos:836766190}" Jun 20 19:51:49.922931 containerd[1551]: time="2025-06-20T19:51:49.922388344Z" level=info msg="shim disconnected" id=5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd namespace=k8s.io Jun 20 19:51:49.922931 containerd[1551]: time="2025-06-20T19:51:49.922424983Z" level=warning msg="cleaning up after shim disconnected" id=5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd namespace=k8s.io Jun 20 19:51:49.922931 containerd[1551]: time="2025-06-20T19:51:49.922561400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:51:49.923640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd-rootfs.mount: Deactivated successfully. 
Jun 20 19:51:49.954580 containerd[1551]: time="2025-06-20T19:51:49.953328721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"40b686f26a1bd4f119da0c66b6973acfdb5260830ffc872916cca39f57398305\" pid:5830 exited_at:{seconds:1750449109 nanos:861290195}" Jun 20 19:51:49.955378 containerd[1551]: time="2025-06-20T19:51:49.955335873Z" level=info msg="received exit event sandbox_id:\"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" exit_status:137 exited_at:{seconds:1750449109 nanos:836766190}" Jun 20 19:51:49.958305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd-shm.mount: Deactivated successfully. Jun 20 19:51:50.045141 systemd-networkd[1443]: calie3bb2900ef3: Link DOWN Jun 20 19:51:50.045152 systemd-networkd[1443]: calie3bb2900ef3: Lost carrier Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.042 [INFO][5886] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.043 [INFO][5886] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" iface="eth0" netns="/var/run/netns/cni-71f823c8-8cfd-03a0-0fad-303830a30af2" Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.043 [INFO][5886] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" iface="eth0" netns="/var/run/netns/cni-71f823c8-8cfd-03a0-0fad-303830a30af2" Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.051 [INFO][5886] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" after=7.659043ms iface="eth0" netns="/var/run/netns/cni-71f823c8-8cfd-03a0-0fad-303830a30af2" Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.051 [INFO][5886] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.051 [INFO][5886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.097 [INFO][5893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.098 [INFO][5893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.098 [INFO][5893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.184 [INFO][5893] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.184 [INFO][5893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.187 [INFO][5893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:51:50.193463 containerd[1551]: 2025-06-20 19:51:50.190 [INFO][5886] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:51:50.198370 containerd[1551]: time="2025-06-20T19:51:50.198228891Z" level=info msg="TearDown network for sandbox \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" successfully" Jun 20 19:51:50.198370 containerd[1551]: time="2025-06-20T19:51:50.198303061Z" level=info msg="StopPodSandbox for \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" returns successfully" Jun 20 19:51:50.200842 systemd[1]: run-netns-cni\x2d71f823c8\x2d8cfd\x2d03a0\x2d0fad\x2d303830a30af2.mount: Deactivated successfully. 
Jun 20 19:51:50.376350 kubelet[2815]: I0620 19:51:50.376267 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31d1e1de-5d04-4fb1-a1dd-f2993de9970d-calico-apiserver-certs\") pod \"31d1e1de-5d04-4fb1-a1dd-f2993de9970d\" (UID: \"31d1e1de-5d04-4fb1-a1dd-f2993de9970d\") " Jun 20 19:51:50.376996 kubelet[2815]: I0620 19:51:50.376358 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb2hv\" (UniqueName: \"kubernetes.io/projected/31d1e1de-5d04-4fb1-a1dd-f2993de9970d-kube-api-access-lb2hv\") pod \"31d1e1de-5d04-4fb1-a1dd-f2993de9970d\" (UID: \"31d1e1de-5d04-4fb1-a1dd-f2993de9970d\") " Jun 20 19:51:50.387620 kubelet[2815]: I0620 19:51:50.385386 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d1e1de-5d04-4fb1-a1dd-f2993de9970d-kube-api-access-lb2hv" (OuterVolumeSpecName: "kube-api-access-lb2hv") pod "31d1e1de-5d04-4fb1-a1dd-f2993de9970d" (UID: "31d1e1de-5d04-4fb1-a1dd-f2993de9970d"). InnerVolumeSpecName "kube-api-access-lb2hv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:51:50.387620 kubelet[2815]: I0620 19:51:50.385612 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d1e1de-5d04-4fb1-a1dd-f2993de9970d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "31d1e1de-5d04-4fb1-a1dd-f2993de9970d" (UID: "31d1e1de-5d04-4fb1-a1dd-f2993de9970d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:51:50.386612 systemd[1]: var-lib-kubelet-pods-31d1e1de\x2d5d04\x2d4fb1\x2da1dd\x2df2993de9970d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlb2hv.mount: Deactivated successfully. 
Jun 20 19:51:50.477064 kubelet[2815]: I0620 19:51:50.476925 2815 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31d1e1de-5d04-4fb1-a1dd-f2993de9970d-calico-apiserver-certs\") on node \"ci-4344-1-0-0-4524070979.novalocal\" DevicePath \"\"" Jun 20 19:51:50.477064 kubelet[2815]: I0620 19:51:50.476965 2815 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lb2hv\" (UniqueName: \"kubernetes.io/projected/31d1e1de-5d04-4fb1-a1dd-f2993de9970d-kube-api-access-lb2hv\") on node \"ci-4344-1-0-0-4524070979.novalocal\" DevicePath \"\"" Jun 20 19:51:50.767058 systemd[1]: var-lib-kubelet-pods-31d1e1de\x2d5d04\x2d4fb1\x2da1dd\x2df2993de9970d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jun 20 19:51:50.926865 kubelet[2815]: I0620 19:51:50.926789 2815 scope.go:117] "RemoveContainer" containerID="8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf" Jun 20 19:51:50.933124 containerd[1551]: time="2025-06-20T19:51:50.933033428Z" level=info msg="RemoveContainer for \"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\"" Jun 20 19:51:50.940909 systemd[1]: Removed slice kubepods-besteffort-pod31d1e1de_5d04_4fb1_a1dd_f2993de9970d.slice - libcontainer container kubepods-besteffort-pod31d1e1de_5d04_4fb1_a1dd_f2993de9970d.slice. Jun 20 19:51:50.941049 systemd[1]: kubepods-besteffort-pod31d1e1de_5d04_4fb1_a1dd_f2993de9970d.slice: Consumed 1.332s CPU time, 57.7M memory peak. 
Jun 20 19:51:50.944117 containerd[1551]: time="2025-06-20T19:51:50.943915205Z" level=info msg="RemoveContainer for \"8989618f11ba675f3cafa9329f315e7dcafdb139ad37dd00f63311a529a037cf\" returns successfully" Jun 20 19:51:51.151357 kubelet[2815]: I0620 19:51:51.151291 2815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d1e1de-5d04-4fb1-a1dd-f2993de9970d" path="/var/lib/kubelet/pods/31d1e1de-5d04-4fb1-a1dd-f2993de9970d/volumes" Jun 20 19:51:58.836497 systemd[1]: Started sshd@9-172.24.4.123:22-172.24.4.1:59770.service - OpenSSH per-connection server daemon (172.24.4.1:59770). Jun 20 19:51:58.985540 update_engine[1538]: I20250620 19:51:58.985400 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:51:58.986074 update_engine[1538]: I20250620 19:51:58.985922 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:51:58.986488 update_engine[1538]: I20250620 19:51:58.986447 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 19:51:58.991433 update_engine[1538]: E20250620 19:51:58.991355 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:51:58.991648 update_engine[1538]: I20250620 19:51:58.991484 1538 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 20 19:52:00.080055 sshd[5925]: Accepted publickey for core from 172.24.4.1 port 59770 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:00.083886 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:00.096903 systemd-logind[1537]: New session 12 of user core. Jun 20 19:52:00.104442 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 20 19:52:00.905585 sshd[5929]: Connection closed by 172.24.4.1 port 59770 Jun 20 19:52:00.906527 sshd-session[5925]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:00.911550 systemd[1]: sshd@9-172.24.4.123:22-172.24.4.1:59770.service: Deactivated successfully. Jun 20 19:52:00.915243 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:52:00.918238 systemd-logind[1537]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:52:00.920488 systemd-logind[1537]: Removed session 12. Jun 20 19:52:05.930680 systemd[1]: Started sshd@10-172.24.4.123:22-172.24.4.1:60650.service - OpenSSH per-connection server daemon (172.24.4.1:60650). Jun 20 19:52:07.320766 sshd[5945]: Accepted publickey for core from 172.24.4.1 port 60650 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:07.326056 sshd-session[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:07.338643 systemd-logind[1537]: New session 13 of user core. Jun 20 19:52:07.346490 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:52:08.037158 sshd[5947]: Connection closed by 172.24.4.1 port 60650 Jun 20 19:52:08.040417 sshd-session[5945]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:08.044642 systemd[1]: sshd@10-172.24.4.123:22-172.24.4.1:60650.service: Deactivated successfully. Jun 20 19:52:08.051999 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:52:08.055110 systemd-logind[1537]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:52:08.058943 systemd-logind[1537]: Removed session 13. 
Jun 20 19:52:08.976423 update_engine[1538]: I20250620 19:52:08.976261 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:52:08.977055 update_engine[1538]: I20250620 19:52:08.976672 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:52:08.977282 update_engine[1538]: I20250620 19:52:08.977236 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 19:52:08.983478 update_engine[1538]: E20250620 19:52:08.983402 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:52:08.983478 update_engine[1538]: I20250620 19:52:08.983485 1538 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 19:52:08.983715 update_engine[1538]: I20250620 19:52:08.983507 1538 omaha_request_action.cc:617] Omaha request response: Jun 20 19:52:09.195593 update_engine[1538]: E20250620 19:52:09.195487 1538 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 20 19:52:09.195823 update_engine[1538]: I20250620 19:52:09.195768 1538 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 20 19:52:09.195823 update_engine[1538]: I20250620 19:52:09.195778 1538 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 19:52:09.195823 update_engine[1538]: I20250620 19:52:09.195788 1538 update_attempter.cc:306] Processing Done. Jun 20 19:52:09.196418 update_engine[1538]: E20250620 19:52:09.195845 1538 update_attempter.cc:619] Update failed. 
Jun 20 19:52:09.196418 update_engine[1538]: I20250620 19:52:09.195859 1538 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 20 19:52:09.196418 update_engine[1538]: I20250620 19:52:09.195868 1538 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 20 19:52:09.196418 update_engine[1538]: I20250620 19:52:09.195873 1538 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 20 19:52:09.196418 update_engine[1538]: I20250620 19:52:09.196049 1538 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 19:52:09.196418 update_engine[1538]: I20250620 19:52:09.196115 1538 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 19:52:09.196418 update_engine[1538]: I20250620 19:52:09.196122 1538 omaha_request_action.cc:272] Request: Jun 20 19:52:09.196418 update_engine[1538]: Jun 20 19:52:09.196418 update_engine[1538]: Jun 20 19:52:09.196418 update_engine[1538]: Jun 20 19:52:09.196418 update_engine[1538]: Jun 20 19:52:09.196418 update_engine[1538]: Jun 20 19:52:09.196418 update_engine[1538]: Jun 20 19:52:09.196418 update_engine[1538]: I20250620 19:52:09.196130 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:52:09.196915 update_engine[1538]: I20250620 19:52:09.196646 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:52:09.196958 update_engine[1538]: I20250620 19:52:09.196926 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 19:52:09.198406 locksmithd[1590]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 20 19:52:09.201964 update_engine[1538]: E20250620 19:52:09.201917 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:52:09.202105 update_engine[1538]: I20250620 19:52:09.201972 1538 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 19:52:09.202105 update_engine[1538]: I20250620 19:52:09.201982 1538 omaha_request_action.cc:617] Omaha request response: Jun 20 19:52:09.202105 update_engine[1538]: I20250620 19:52:09.201989 1538 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 19:52:09.202105 update_engine[1538]: I20250620 19:52:09.201994 1538 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 19:52:09.202105 update_engine[1538]: I20250620 19:52:09.201998 1538 update_attempter.cc:306] Processing Done. Jun 20 19:52:09.202105 update_engine[1538]: I20250620 19:52:09.202004 1538 update_attempter.cc:310] Error event sent. 
Jun 20 19:52:09.202105 update_engine[1538]: I20250620 19:52:09.202023 1538 update_check_scheduler.cc:74] Next update check in 46m49s Jun 20 19:52:09.203254 locksmithd[1590]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 20 19:52:10.014858 containerd[1551]: time="2025-06-20T19:52:10.014736603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"fd02ee44e8b138e68236ceebdcd067c24a0014a33bb2b563224e2227e7e5da5f\" pid:5971 exited_at:{seconds:1750449130 nanos:13071546}" Jun 20 19:52:12.853878 containerd[1551]: time="2025-06-20T19:52:12.853623393Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"c2f4a5ec47e51cbd6704ab93e96c4983ec378e53230ca02c5da2fc9875e34cee\" pid:5997 exited_at:{seconds:1750449132 nanos:851406566}" Jun 20 19:52:13.052844 systemd[1]: Started sshd@11-172.24.4.123:22-172.24.4.1:60666.service - OpenSSH per-connection server daemon (172.24.4.1:60666). Jun 20 19:52:14.259145 sshd[6010]: Accepted publickey for core from 172.24.4.1 port 60666 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:14.260943 sshd-session[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:14.273161 systemd-logind[1537]: New session 14 of user core. Jun 20 19:52:14.276364 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:52:15.248392 sshd[6012]: Connection closed by 172.24.4.1 port 60666 Jun 20 19:52:15.249975 sshd-session[6010]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:15.262063 systemd[1]: sshd@11-172.24.4.123:22-172.24.4.1:60666.service: Deactivated successfully. Jun 20 19:52:15.265732 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:52:15.269324 systemd-logind[1537]: Session 14 logged out. Waiting for processes to exit. 
Jun 20 19:52:15.274521 systemd[1]: Started sshd@12-172.24.4.123:22-172.24.4.1:44302.service - OpenSSH per-connection server daemon (172.24.4.1:44302). Jun 20 19:52:15.279550 systemd-logind[1537]: Removed session 14. Jun 20 19:52:16.710019 sshd[6025]: Accepted publickey for core from 172.24.4.1 port 44302 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:16.712344 sshd-session[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:16.720145 systemd-logind[1537]: New session 15 of user core. Jun 20 19:52:16.726468 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 19:52:17.397531 sshd[6027]: Connection closed by 172.24.4.1 port 44302 Jun 20 19:52:17.400358 sshd-session[6025]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:17.412358 systemd[1]: sshd@12-172.24.4.123:22-172.24.4.1:44302.service: Deactivated successfully. Jun 20 19:52:17.415775 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:52:17.419654 systemd-logind[1537]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:52:17.427091 systemd[1]: Started sshd@13-172.24.4.123:22-172.24.4.1:44304.service - OpenSSH per-connection server daemon (172.24.4.1:44304). Jun 20 19:52:17.431575 systemd-logind[1537]: Removed session 15. Jun 20 19:52:18.642388 sshd[6037]: Accepted publickey for core from 172.24.4.1 port 44304 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:18.646286 sshd-session[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:18.654876 systemd-logind[1537]: New session 16 of user core. Jun 20 19:52:18.660360 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 20 19:52:19.183360 containerd[1551]: time="2025-06-20T19:52:19.183289926Z" level=info msg="StopPodSandbox for \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\"" Jun 20 19:52:19.312511 sshd[6039]: Connection closed by 172.24.4.1 port 44304 Jun 20 19:52:19.312837 sshd-session[6037]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:19.322051 systemd[1]: sshd@13-172.24.4.123:22-172.24.4.1:44304.service: Deactivated successfully. Jun 20 19:52:19.325486 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:52:19.331120 systemd-logind[1537]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:52:19.334876 systemd-logind[1537]: Removed session 16. Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.263 [WARNING][6057] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.263 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.263 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" iface="eth0" netns="" Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.263 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.263 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.303 [INFO][6064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.304 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.304 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.320 [WARNING][6064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.326 [INFO][6064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.335 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:52:19.339995 containerd[1551]: 2025-06-20 19:52:19.337 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:52:19.340546 containerd[1551]: time="2025-06-20T19:52:19.340161548Z" level=info msg="TearDown network for sandbox \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" successfully" Jun 20 19:52:19.340546 containerd[1551]: time="2025-06-20T19:52:19.340345023Z" level=info msg="StopPodSandbox for \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" returns successfully" Jun 20 19:52:19.342825 containerd[1551]: time="2025-06-20T19:52:19.342790712Z" level=info msg="RemovePodSandbox for \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\"" Jun 20 19:52:19.342922 containerd[1551]: time="2025-06-20T19:52:19.342864481Z" level=info msg="Forcibly stopping sandbox \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\"" Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.391 [WARNING][6081] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean 
up ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.391 [INFO][6081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.391 [INFO][6081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" iface="eth0" netns="" Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.391 [INFO][6081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.391 [INFO][6081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.431 [INFO][6088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.434 [INFO][6088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.435 [INFO][6088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.454 [WARNING][6088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.454 [INFO][6088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" HandleID="k8s-pod-network.5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--4wrqp-eth0" Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.457 [INFO][6088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:52:19.462441 containerd[1551]: 2025-06-20 19:52:19.460 [INFO][6081] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd" Jun 20 19:52:19.462441 containerd[1551]: time="2025-06-20T19:52:19.462395618Z" level=info msg="TearDown network for sandbox \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" successfully" Jun 20 19:52:19.467626 containerd[1551]: time="2025-06-20T19:52:19.467581882Z" level=info msg="Ensure that sandbox 5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd in task-service has been cleanup successfully" Jun 20 19:52:19.473217 containerd[1551]: time="2025-06-20T19:52:19.473150134Z" level=info msg="RemovePodSandbox \"5d982c8eddf652c9cdd8bca08a65dfbbf61e8a1a6f706638deb3f8c3821611bd\" returns successfully" Jun 20 19:52:19.475446 containerd[1551]: time="2025-06-20T19:52:19.475415283Z" level=info msg="StopPodSandbox for \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\"" Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.577 [WARNING][6104] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the 
datastore, moving forward with the clean up ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.578 [INFO][6104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.578 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" iface="eth0" netns="" Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.578 [INFO][6104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.578 [INFO][6104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.622 [INFO][6129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.623 [INFO][6129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.624 [INFO][6129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.635 [WARNING][6129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.637 [INFO][6129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.640 [INFO][6129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:52:19.643422 containerd[1551]: 2025-06-20 19:52:19.641 [INFO][6104] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:52:19.645316 containerd[1551]: time="2025-06-20T19:52:19.643472603Z" level=info msg="TearDown network for sandbox \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" successfully" Jun 20 19:52:19.645316 containerd[1551]: time="2025-06-20T19:52:19.643505556Z" level=info msg="StopPodSandbox for \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" returns successfully" Jun 20 19:52:19.645316 containerd[1551]: time="2025-06-20T19:52:19.644396825Z" level=info msg="RemovePodSandbox for \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\"" Jun 20 19:52:19.645316 containerd[1551]: time="2025-06-20T19:52:19.644444144Z" level=info msg="Forcibly stopping sandbox \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\"" Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.728 [WARNING][6143] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean 
up ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" WorkloadEndpoint="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.730 [INFO][6143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.730 [INFO][6143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" iface="eth0" netns="" Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.730 [INFO][6143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.730 [INFO][6143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.792 [INFO][6160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.792 [INFO][6160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.792 [INFO][6160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.812 [WARNING][6160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.812 [INFO][6160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" HandleID="k8s-pod-network.50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Workload="ci--4344--1--0--0--4524070979.novalocal-k8s-calico--apiserver--749bf4dccb--n8f2p-eth0" Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.815 [INFO][6160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:52:19.820837 containerd[1551]: 2025-06-20 19:52:19.817 [INFO][6143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712" Jun 20 19:52:19.821516 containerd[1551]: time="2025-06-20T19:52:19.820894771Z" level=info msg="TearDown network for sandbox \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" successfully" Jun 20 19:52:19.824189 containerd[1551]: time="2025-06-20T19:52:19.824076557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"ff8dd363ea7effbe96dc6b94d8f51677e8f13113b464e45f1bacbae8aff62d16\" pid:6121 exited_at:{seconds:1750449139 nanos:823343235}" Jun 20 19:52:19.828754 containerd[1551]: time="2025-06-20T19:52:19.828711671Z" level=info msg="Ensure that sandbox 50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712 in task-service has been cleanup successfully" Jun 20 19:52:19.834259 containerd[1551]: time="2025-06-20T19:52:19.834210002Z" level=info msg="RemovePodSandbox \"50be6093f275374bfe9b6eb18f1397751ef67ab352748a26f6a64d15c421d712\" returns 
successfully" Jun 20 19:52:19.866699 containerd[1551]: time="2025-06-20T19:52:19.866646947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"a095197cc4d6b8b70f6b55eee9c17f6d225a6c37540e2501ea968a04d4c2f673\" pid:6162 exited_at:{seconds:1750449139 nanos:866260329}" Jun 20 19:52:24.331343 systemd[1]: Started sshd@14-172.24.4.123:22-172.24.4.1:47428.service - OpenSSH per-connection server daemon (172.24.4.1:47428). Jun 20 19:52:25.650807 sshd[6182]: Accepted publickey for core from 172.24.4.1 port 47428 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:25.655095 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:25.669909 systemd-logind[1537]: New session 17 of user core. Jun 20 19:52:25.671595 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:52:26.475004 sshd[6186]: Connection closed by 172.24.4.1 port 47428 Jun 20 19:52:26.475785 sshd-session[6182]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:26.485144 systemd-logind[1537]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:52:26.488752 systemd[1]: sshd@14-172.24.4.123:22-172.24.4.1:47428.service: Deactivated successfully. Jun 20 19:52:26.494378 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:52:26.504720 systemd-logind[1537]: Removed session 17. Jun 20 19:52:31.508403 systemd[1]: Started sshd@15-172.24.4.123:22-172.24.4.1:47440.service - OpenSSH per-connection server daemon (172.24.4.1:47440). Jun 20 19:52:32.462273 sshd[6226]: Accepted publickey for core from 172.24.4.1 port 47440 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:32.468297 sshd-session[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:32.478721 systemd-logind[1537]: New session 18 of user core. 
Jun 20 19:52:32.485481 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:52:33.076956 sshd[6228]: Connection closed by 172.24.4.1 port 47440 Jun 20 19:52:33.077953 sshd-session[6226]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:33.083927 systemd[1]: sshd@15-172.24.4.123:22-172.24.4.1:47440.service: Deactivated successfully. Jun 20 19:52:33.088088 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:52:33.093284 systemd-logind[1537]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:52:33.096896 systemd-logind[1537]: Removed session 18. Jun 20 19:52:38.108923 systemd[1]: Started sshd@16-172.24.4.123:22-172.24.4.1:33940.service - OpenSSH per-connection server daemon (172.24.4.1:33940). Jun 20 19:52:39.381522 sshd[6246]: Accepted publickey for core from 172.24.4.1 port 33940 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:39.384934 sshd-session[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:39.401943 systemd-logind[1537]: New session 19 of user core. Jun 20 19:52:39.410701 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:52:40.293453 containerd[1551]: time="2025-06-20T19:52:40.291793890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"c4e2c6398cec2eec1d55a18bf67559279f549738c7dec4d22ceee815f5645ff4\" pid:6269 exited_at:{seconds:1750449160 nanos:286422750}" Jun 20 19:52:40.296832 sshd[6248]: Connection closed by 172.24.4.1 port 33940 Jun 20 19:52:40.298553 sshd-session[6246]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:40.306732 systemd[1]: sshd@16-172.24.4.123:22-172.24.4.1:33940.service: Deactivated successfully. Jun 20 19:52:40.315246 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:52:40.325400 systemd-logind[1537]: Session 19 logged out. 
Waiting for processes to exit. Jun 20 19:52:40.330481 systemd-logind[1537]: Removed session 19. Jun 20 19:52:42.865730 containerd[1551]: time="2025-06-20T19:52:42.865650764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"58167a3312c7cc4fbba485aeac6cc4160b73170996ae0261401204b566d02e31\" pid:6297 exited_at:{seconds:1750449162 nanos:865089196}" Jun 20 19:52:45.318963 systemd[1]: Started sshd@17-172.24.4.123:22-172.24.4.1:35834.service - OpenSSH per-connection server daemon (172.24.4.1:35834). Jun 20 19:52:46.620294 sshd[6309]: Accepted publickey for core from 172.24.4.1 port 35834 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:46.622513 sshd-session[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:46.631970 systemd-logind[1537]: New session 20 of user core. Jun 20 19:52:46.641836 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:52:47.400298 sshd[6311]: Connection closed by 172.24.4.1 port 35834 Jun 20 19:52:47.401581 sshd-session[6309]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:47.410731 systemd[1]: sshd@17-172.24.4.123:22-172.24.4.1:35834.service: Deactivated successfully. Jun 20 19:52:47.416120 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:52:47.418089 systemd-logind[1537]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:52:47.422439 systemd[1]: Started sshd@18-172.24.4.123:22-172.24.4.1:35842.service - OpenSSH per-connection server daemon (172.24.4.1:35842). Jun 20 19:52:47.426465 systemd-logind[1537]: Removed session 20. 
Jun 20 19:52:47.877876 containerd[1551]: time="2025-06-20T19:52:47.877804837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"1334d2491f09c98eac2ccac27362c97d9be09973e1bd85fb41c6ca445bfb33ef\" pid:6350 exited_at:{seconds:1750449167 nanos:876754498}" Jun 20 19:52:48.642919 sshd[6323]: Accepted publickey for core from 172.24.4.1 port 35842 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:52:48.646316 sshd-session[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:52:48.657983 systemd-logind[1537]: New session 21 of user core. Jun 20 19:52:48.667559 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:52:49.787466 containerd[1551]: time="2025-06-20T19:52:49.787369410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"5ce838352b19d3eb5fdd0e1e5db1b80b6aa0f913c2aa9815d5740e1abdeca788\" pid:6382 exited_at:{seconds:1750449169 nanos:786575484}" Jun 20 19:52:57.566556 sshd[6361]: Connection closed by 172.24.4.1 port 35842 Jun 20 19:52:57.570011 sshd-session[6323]: pam_unix(sshd:session): session closed for user core Jun 20 19:52:57.580824 systemd[1]: sshd@18-172.24.4.123:22-172.24.4.1:35842.service: Deactivated successfully. Jun 20 19:52:57.584225 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:52:57.588381 systemd-logind[1537]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:52:57.594630 systemd[1]: Started sshd@19-172.24.4.123:22-172.24.4.1:34412.service - OpenSSH per-connection server daemon (172.24.4.1:34412). Jun 20 19:52:57.598952 systemd-logind[1537]: Removed session 21. 
Jun 20 19:53:00.973342 sshd[6402]: Accepted publickey for core from 172.24.4.1 port 34412 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:53:00.975047 sshd-session[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:53:00.983378 systemd-logind[1537]: New session 22 of user core. Jun 20 19:53:00.990388 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:53:03.365059 sshd[6404]: Connection closed by 172.24.4.1 port 34412 Jun 20 19:53:03.368741 sshd-session[6402]: pam_unix(sshd:session): session closed for user core Jun 20 19:53:03.380876 systemd[1]: sshd@19-172.24.4.123:22-172.24.4.1:34412.service: Deactivated successfully. Jun 20 19:53:03.392980 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:53:03.394471 systemd-logind[1537]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:53:03.398484 systemd[1]: Started sshd@20-172.24.4.123:22-172.24.4.1:34418.service - OpenSSH per-connection server daemon (172.24.4.1:34418). Jun 20 19:53:03.404246 systemd-logind[1537]: Removed session 22. Jun 20 19:53:04.906619 sshd[6434]: Accepted publickey for core from 172.24.4.1 port 34418 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:53:04.909046 sshd-session[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:53:04.920939 systemd-logind[1537]: New session 23 of user core. Jun 20 19:53:04.928417 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:53:06.041759 sshd[6436]: Connection closed by 172.24.4.1 port 34418 Jun 20 19:53:06.045643 sshd-session[6434]: pam_unix(sshd:session): session closed for user core Jun 20 19:53:06.064469 systemd[1]: sshd@20-172.24.4.123:22-172.24.4.1:34418.service: Deactivated successfully. Jun 20 19:53:06.070638 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:53:06.074469 systemd-logind[1537]: Session 23 logged out. 
Waiting for processes to exit. Jun 20 19:53:06.083602 systemd[1]: Started sshd@21-172.24.4.123:22-172.24.4.1:55242.service - OpenSSH per-connection server daemon (172.24.4.1:55242). Jun 20 19:53:06.087672 systemd-logind[1537]: Removed session 23. Jun 20 19:53:07.321539 sshd[6446]: Accepted publickey for core from 172.24.4.1 port 55242 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:53:07.339003 sshd-session[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:53:07.348128 systemd-logind[1537]: New session 24 of user core. Jun 20 19:53:07.350353 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:53:10.479248 sshd[6448]: Connection closed by 172.24.4.1 port 55242 Jun 20 19:53:10.481183 sshd-session[6446]: pam_unix(sshd:session): session closed for user core Jun 20 19:53:10.491230 systemd[1]: sshd@21-172.24.4.123:22-172.24.4.1:55242.service: Deactivated successfully. Jun 20 19:53:10.496131 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:53:10.499381 systemd-logind[1537]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:53:10.502100 systemd-logind[1537]: Removed session 24. Jun 20 19:53:24.697959 systemd[1]: cri-containerd-a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5.scope: Deactivated successfully. Jun 20 19:53:24.698589 systemd[1]: cri-containerd-a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5.scope: Consumed 4.003s CPU time, 23.1M memory peak, 148K read from disk. 
Jun 20 19:53:37.931256 containerd[1551]: time="2025-06-20T19:53:24.757810833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475\" id:\"cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475\" pid:2636 exit_status:1 exited_at:{seconds:1750449204 nanos:755251246}" Jun 20 19:53:37.931256 containerd[1551]: time="2025-06-20T19:53:24.758950263Z" level=info msg="received exit event container_id:\"cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475\" id:\"cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475\" pid:2636 exit_status:1 exited_at:{seconds:1750449204 nanos:755251246}" Jun 20 19:53:37.931256 containerd[1551]: time="2025-06-20T19:53:24.766632411Z" level=info msg="received exit event container_id:\"70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e\" id:\"70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e\" pid:3138 exit_status:1 exited_at:{seconds:1750449204 nanos:761802620}" Jun 20 19:53:37.931256 containerd[1551]: time="2025-06-20T19:53:24.779017121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e\" id:\"70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e\" pid:3138 exit_status:1 exited_at:{seconds:1750449204 nanos:761802620}" Jun 20 19:53:37.931256 containerd[1551]: time="2025-06-20T19:53:24.779072075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5\" id:\"a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5\" pid:2653 exit_status:1 exited_at:{seconds:1750449204 nanos:777703153}" Jun 20 19:53:37.931256 containerd[1551]: time="2025-06-20T19:53:24.780456003Z" level=info msg="received exit event container_id:\"a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5\" 
id:\"a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5\" pid:2653 exit_status:1 exited_at:{seconds:1750449204 nanos:777703153}" Jun 20 19:53:37.931256 containerd[1551]: time="2025-06-20T19:53:37.635056034Z" level=error msg="Failed to get usage for snapshot \"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\"" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/69/fs/etc/service/enabled/monitor-addresses/supervise/pid.new: no such file or directory" Jun 20 19:53:24.711929 systemd[1]: Started sshd@22-172.24.4.123:22-172.24.4.1:35988.service - OpenSSH per-connection server daemon (172.24.4.1:35988). Jun 20 19:53:24.730726 systemd[1]: cri-containerd-cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475.scope: Deactivated successfully. Jun 20 19:53:24.731110 systemd[1]: cri-containerd-cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475.scope: Consumed 6.381s CPU time, 60.5M memory peak, 704K read from disk. Jun 20 19:53:24.738912 systemd[1]: cri-containerd-70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e.scope: Deactivated successfully. Jun 20 19:53:24.739376 systemd[1]: cri-containerd-70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e.scope: Consumed 24.378s CPU time, 105.6M memory peak, 912K read from disk. Jun 20 19:53:24.849135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475-rootfs.mount: Deactivated successfully. Jun 20 19:53:30.022865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5-rootfs.mount: Deactivated successfully. Jun 20 19:53:32.501883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e-rootfs.mount: Deactivated successfully. 
Jun 20 19:53:37.951097 sshd[6461]: Accepted publickey for core from 172.24.4.1 port 35988 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:53:37.954839 sshd-session[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:53:37.974281 systemd-logind[1537]: New session 25 of user core. Jun 20 19:53:37.977502 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 19:53:38.063600 kubelet[2815]: E0620 19:53:38.063508 2815 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="28.915s" Jun 20 19:53:38.063600 kubelet[2815]: E0620 19:53:38.063609 2815 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Jun 20 19:53:38.066538 kubelet[2815]: I0620 19:53:38.065242 2815 setters.go:618] "Node became not ready" node="ci-4344-1-0-0-4524070979.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:53:38Z","lastTransitionTime":"2025-06-20T19:53:38Z","reason":"KubeletNotReady","message":"container runtime is down"} Jun 20 19:53:38.278124 containerd[1551]: time="2025-06-20T19:53:38.277247040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"d399ce6028389247efb48445a6b953792b672fdea30975d259577bb00bf2cab8\" pid:6528 exit_status:1 exited_at:{seconds:1750449218 nanos:271162616}" Jun 20 19:53:38.361830 containerd[1551]: time="2025-06-20T19:53:38.361676705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"a6a9d2dcc93ca50010c7c676c25110ce67a649bfa769282b8d82077acb0adfa9\" pid:6592 exit_status:1 exited_at:{seconds:1750449218 nanos:361389175}" Jun 20 19:53:38.488545 containerd[1551]: time="2025-06-20T19:53:38.487862546Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"5d00be286305ad57217055a2ef89378039252a992f225320edbe447dbf42f4e0\" pid:6557 exited_at:{seconds:1750449218 nanos:486418303}" Jun 20 19:53:38.532498 containerd[1551]: time="2025-06-20T19:53:38.532224699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"79928fa86fbb4562a2525c215493cbcc945a27927e4358133a159478ec5e40fc\" pid:6579 exited_at:{seconds:1750449218 nanos:529924387}" Jun 20 19:53:39.564600 sshd[6505]: Connection closed by 172.24.4.1 port 35988 Jun 20 19:53:39.565858 sshd-session[6461]: pam_unix(sshd:session): session closed for user core Jun 20 19:53:39.575456 systemd-logind[1537]: Session 25 logged out. Waiting for processes to exit. Jun 20 19:53:39.580109 systemd[1]: sshd@22-172.24.4.123:22-172.24.4.1:35988.service: Deactivated successfully. Jun 20 19:53:39.586144 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 19:53:39.592723 systemd-logind[1537]: Removed session 25. 
Jun 20 19:53:40.039925 containerd[1551]: time="2025-06-20T19:53:40.038280798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"fe29c47d8aae86bb0df82787eb4ba78fbfdbc862042fcff633796ad49fe5f900\" pid:6633 exited_at:{seconds:1750449220 nanos:33387592}" Jun 20 19:53:40.204763 kubelet[2815]: I0620 19:53:40.204564 2815 scope.go:117] "RemoveContainer" containerID="a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5" Jun 20 19:53:40.205593 kubelet[2815]: I0620 19:53:40.205315 2815 scope.go:117] "RemoveContainer" containerID="70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e" Jun 20 19:53:40.255254 kubelet[2815]: I0620 19:53:40.254437 2815 scope.go:117] "RemoveContainer" containerID="cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475" Jun 20 19:53:40.535663 containerd[1551]: time="2025-06-20T19:53:40.535499679Z" level=info msg="CreateContainer within sandbox \"aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 20 19:53:40.538395 containerd[1551]: time="2025-06-20T19:53:40.536265108Z" level=info msg="CreateContainer within sandbox \"73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 20 19:53:40.880152 containerd[1551]: time="2025-06-20T19:53:40.880013114Z" level=info msg="CreateContainer within sandbox \"21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 20 19:53:43.662757 containerd[1551]: time="2025-06-20T19:53:43.662401366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"7ac1c58f38cec342518658d4dad6a6a56a7d49026a129624e4f08b82ed90748e\" pid:6657 exited_at:{seconds:1750449223 
nanos:660309524}" Jun 20 19:53:44.239509 containerd[1551]: time="2025-06-20T19:53:44.239359451Z" level=info msg="Container 2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:53:44.274451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195128978.mount: Deactivated successfully. Jun 20 19:53:44.411871 systemd[1]: Started sshd@23-172.24.4.123:22-172.24.4.1:34306.service - OpenSSH per-connection server daemon (172.24.4.1:34306). Jun 20 19:53:45.476837 containerd[1551]: time="2025-06-20T19:53:45.476688992Z" level=info msg="Container a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:53:46.745325 containerd[1551]: time="2025-06-20T19:53:46.744839971Z" level=info msg="CreateContainer within sandbox \"aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287\"" Jun 20 19:53:46.769030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168768396.mount: Deactivated successfully. 
Jun 20 19:53:46.783919 containerd[1551]: time="2025-06-20T19:53:46.782656482Z" level=info msg="Container e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:53:46.794733 containerd[1551]: time="2025-06-20T19:53:46.790747380Z" level=info msg="StartContainer for \"2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287\"" Jun 20 19:53:46.795745 containerd[1551]: time="2025-06-20T19:53:46.795705790Z" level=info msg="connecting to shim 2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287" address="unix:///run/containerd/s/8d170f4a54531fc0fcffe4942d5a7bdbff5c1b2584a97fa97cd7568a511883ba" protocol=ttrpc version=3 Jun 20 19:53:46.829428 systemd[1]: Started cri-containerd-2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287.scope - libcontainer container 2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287. Jun 20 19:53:47.142964 containerd[1551]: time="2025-06-20T19:53:47.142840277Z" level=info msg="StartContainer for \"2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287\" returns successfully" Jun 20 19:53:47.408232 containerd[1551]: time="2025-06-20T19:53:47.407610345Z" level=info msg="CreateContainer within sandbox \"73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9\"" Jun 20 19:53:47.414204 containerd[1551]: time="2025-06-20T19:53:47.413404036Z" level=info msg="StartContainer for \"a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9\"" Jun 20 19:53:47.419473 containerd[1551]: time="2025-06-20T19:53:47.419277536Z" level=info msg="connecting to shim a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9" address="unix:///run/containerd/s/c8265e16fe113449f820f238250199cf923dac65d531d83c42add05f31d2a4d6" protocol=ttrpc version=3 Jun 20 19:53:47.425778 
containerd[1551]: time="2025-06-20T19:53:47.425736608Z" level=info msg="CreateContainer within sandbox \"21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3\"" Jun 20 19:53:47.483296 containerd[1551]: time="2025-06-20T19:53:47.483245768Z" level=info msg="StartContainer for \"e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3\"" Jun 20 19:53:47.485956 containerd[1551]: time="2025-06-20T19:53:47.485851336Z" level=info msg="connecting to shim e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3" address="unix:///run/containerd/s/422fa2b4b6dcaa4fc0b1e4021a71b74504aa2694ec8da4e6566b1eab89e29a3a" protocol=ttrpc version=3 Jun 20 19:53:47.495800 systemd[1]: Started cri-containerd-a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9.scope - libcontainer container a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9. Jun 20 19:53:47.531507 systemd[1]: Started cri-containerd-e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3.scope - libcontainer container e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3. 
Jun 20 19:53:47.662257 containerd[1551]: time="2025-06-20T19:53:47.661905955Z" level=info msg="StartContainer for \"e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3\" returns successfully" Jun 20 19:53:47.666663 containerd[1551]: time="2025-06-20T19:53:47.666586573Z" level=info msg="StartContainer for \"a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9\" returns successfully" Jun 20 19:53:47.924975 containerd[1551]: time="2025-06-20T19:53:47.924796751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"7c92a2139b7089758704532f3cd13c7f66a789679ab125b656a13d884f6669f3\" pid:6776 exited_at:{seconds:1750449227 nanos:922923350}" Jun 20 19:53:48.051671 sshd[6671]: Accepted publickey for core from 172.24.4.1 port 34306 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:53:48.054796 sshd-session[6671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:53:48.067765 systemd-logind[1537]: New session 26 of user core. Jun 20 19:53:48.072306 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 19:53:48.824781 sshd[6789]: Connection closed by 172.24.4.1 port 34306 Jun 20 19:53:48.825859 sshd-session[6671]: pam_unix(sshd:session): session closed for user core Jun 20 19:53:48.830669 systemd-logind[1537]: Session 26 logged out. Waiting for processes to exit. Jun 20 19:53:48.831493 systemd[1]: sshd@23-172.24.4.123:22-172.24.4.1:34306.service: Deactivated successfully. Jun 20 19:53:48.837348 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 19:53:48.841945 systemd-logind[1537]: Removed session 26. 
Jun 20 19:53:50.099196 containerd[1551]: time="2025-06-20T19:53:50.098954552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"99f11aa08a7c746f95c7fc8ab6f3cc07ba011d8e2e054c0262ca8fb9a902d888\" pid:6816 exited_at:{seconds:1750449230 nanos:98578105}" Jun 20 19:53:53.854481 systemd[1]: Started sshd@24-172.24.4.123:22-172.24.4.1:39752.service - OpenSSH per-connection server daemon (172.24.4.1:39752). Jun 20 19:53:54.997725 sshd[6828]: Accepted publickey for core from 172.24.4.1 port 39752 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:53:55.002358 sshd-session[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:53:55.016324 systemd-logind[1537]: New session 27 of user core. Jun 20 19:53:55.029559 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 20 19:53:56.264248 sshd[6836]: Connection closed by 172.24.4.1 port 39752 Jun 20 19:53:56.265017 sshd-session[6828]: pam_unix(sshd:session): session closed for user core Jun 20 19:53:56.271908 systemd[1]: sshd@24-172.24.4.123:22-172.24.4.1:39752.service: Deactivated successfully. Jun 20 19:53:56.276843 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 19:53:56.280675 systemd-logind[1537]: Session 27 logged out. Waiting for processes to exit. Jun 20 19:53:56.284081 systemd-logind[1537]: Removed session 27. Jun 20 19:54:02.133654 systemd[1]: Started sshd@25-172.24.4.123:22-172.24.4.1:39764.service - OpenSSH per-connection server daemon (172.24.4.1:39764). Jun 20 19:54:07.655438 sshd[6850]: Accepted publickey for core from 172.24.4.1 port 39764 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:54:07.665007 sshd-session[6850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:54:07.681657 systemd-logind[1537]: New session 28 of user core. 
Jun 20 19:54:07.694681 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 20 19:54:09.528241 sshd[6852]: Connection closed by 172.24.4.1 port 39764 Jun 20 19:54:09.530480 sshd-session[6850]: pam_unix(sshd:session): session closed for user core Jun 20 19:54:09.542097 systemd[1]: sshd@25-172.24.4.123:22-172.24.4.1:39764.service: Deactivated successfully. Jun 20 19:54:09.552011 systemd[1]: session-28.scope: Deactivated successfully. Jun 20 19:54:09.556642 systemd-logind[1537]: Session 28 logged out. Waiting for processes to exit. Jun 20 19:54:09.561587 systemd-logind[1537]: Removed session 28. Jun 20 19:54:10.124633 containerd[1551]: time="2025-06-20T19:54:10.124544229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"548e09737d98b2d13e0fa3c42aea0d79d132367b491d86eeae10d4775aab103d\" pid:6876 exited_at:{seconds:1750449250 nanos:123747620}" Jun 20 19:54:12.865877 containerd[1551]: time="2025-06-20T19:54:12.865806414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"6561a49364d49a4f435739881190e9ff6db1cec8759457c5aa4227b225ce98c1\" pid:6906 exited_at:{seconds:1750449252 nanos:865139810}" Jun 20 19:54:19.451925 kubelet[2815]: E0620 19:54:19.451765 2815 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jun 20 19:54:23.358119 containerd[1551]: time="2025-06-20T19:54:19.604944677Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"3fc7269cbf09dfdbb8ef624604499586d722fbfe3c596f3d27244d4e2e0f1cc9\" pid:6945 exited_at:{seconds:1750449259 nanos:604348166}" Jun 20 19:54:23.358119 containerd[1551]: time="2025-06-20T19:54:19.801835418Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"c208122edc39e76e21276cc271bae49b0c7b1d9aada09bbaea0816e027205c29\" pid:6967 exited_at:{seconds:1750449259 nanos:800712476}" Jun 20 19:54:23.389704 systemd[1]: Started sshd@26-172.24.4.123:22-172.24.4.1:36002.service - OpenSSH per-connection server daemon (172.24.4.1:36002). Jun 20 19:54:23.426267 systemd[1]: cri-containerd-2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287.scope: Deactivated successfully. Jun 20 19:54:23.426704 systemd[1]: cri-containerd-2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287.scope: Consumed 1.045s CPU time, 63.4M memory peak, 1.1M read from disk. Jun 20 19:54:23.469042 containerd[1551]: time="2025-06-20T19:54:23.468964179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287\" id:\"2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287\" pid:6685 exit_status:1 exited_at:{seconds:1750449263 nanos:445021366}" Jun 20 19:54:23.469931 containerd[1551]: time="2025-06-20T19:54:23.469902965Z" level=info msg="received exit event container_id:\"2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287\" id:\"2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287\" pid:6685 exit_status:1 exited_at:{seconds:1750449263 nanos:445021366}" Jun 20 19:54:23.529354 kubelet[2815]: E0620 19:54:23.529297 2815 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4344-1-0-0-4524070979.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jun 20 19:54:23.548698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287-rootfs.mount: Deactivated successfully. 
Jun 20 19:54:23.551494 systemd[1]: cri-containerd-e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3.scope: Deactivated successfully. Jun 20 19:54:23.551833 systemd[1]: cri-containerd-e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3.scope: Consumed 3.146s CPU time, 20.5M memory peak, 1.9M read from disk. Jun 20 19:54:24.032113 systemd[1]: cri-containerd-a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9.scope: Deactivated successfully. Jun 20 19:54:24.032613 systemd[1]: cri-containerd-a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9.scope: Consumed 2.836s CPU time, 52.1M memory peak, 4.1M read from disk. Jun 20 19:54:26.417101 containerd[1551]: time="2025-06-20T19:54:26.416431810Z" level=info msg="received exit event container_id:\"e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3\" id:\"e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3\" pid:6735 exit_status:1 exited_at:{seconds:1750449263 nanos:557419178}" Jun 20 19:54:26.419186 containerd[1551]: time="2025-06-20T19:54:26.419126540Z" level=info msg="received exit event container_id:\"a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9\" id:\"a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9\" pid:6717 exit_status:1 exited_at:{seconds:1750449264 nanos:38357547}" Jun 20 19:54:26.420067 containerd[1551]: time="2025-06-20T19:54:26.419920003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3\" id:\"e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3\" pid:6735 exit_status:1 exited_at:{seconds:1750449263 nanos:557419178}" Jun 20 19:54:26.420067 containerd[1551]: time="2025-06-20T19:54:26.419998500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9\" id:\"a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9\" 
pid:6717 exit_status:1 exited_at:{seconds:1750449264 nanos:38357547}" Jun 20 19:54:26.494642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9-rootfs.mount: Deactivated successfully. Jun 20 19:54:26.530271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3-rootfs.mount: Deactivated successfully. Jun 20 19:54:26.531927 kubelet[2815]: E0620 19:54:26.432056 2815 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.284s" Jun 20 19:54:27.598337 kubelet[2815]: I0620 19:54:27.597704 2815 scope.go:117] "RemoveContainer" containerID="a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5" Jun 20 19:54:27.601601 kubelet[2815]: I0620 19:54:27.601417 2815 scope.go:117] "RemoveContainer" containerID="e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3" Jun 20 19:54:27.605014 kubelet[2815]: E0620 19:54:27.604135 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4344-1-0-0-4524070979.novalocal_kube-system(e3ebf22a1115171285fb45d1f95992d4)\"" pod="kube-system/kube-scheduler-ci-4344-1-0-0-4524070979.novalocal" podUID="e3ebf22a1115171285fb45d1f95992d4" Jun 20 19:54:27.614114 containerd[1551]: time="2025-06-20T19:54:27.614035097Z" level=info msg="RemoveContainer for \"a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5\"" Jun 20 19:54:27.617894 kubelet[2815]: I0620 19:54:27.617415 2815 scope.go:117] "RemoveContainer" containerID="2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287" Jun 20 19:54:27.617894 kubelet[2815]: E0620 19:54:27.617750 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-68f7c7984d-hn5z4_tigera-operator(6dceac94-5d8a-4a17-880a-b168f5c68e50)\"" pod="tigera-operator/tigera-operator-68f7c7984d-hn5z4" podUID="6dceac94-5d8a-4a17-880a-b168f5c68e50" Jun 20 19:54:27.626517 kubelet[2815]: I0620 19:54:27.626481 2815 scope.go:117] "RemoveContainer" containerID="a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9" Jun 20 19:54:27.626999 kubelet[2815]: E0620 19:54:27.626954 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal_kube-system(9b7f3a0768780450411a7965d8d4587b)\"" pod="kube-system/kube-controller-manager-ci-4344-1-0-0-4524070979.novalocal" podUID="9b7f3a0768780450411a7965d8d4587b" Jun 20 19:54:27.694339 containerd[1551]: time="2025-06-20T19:54:27.694257819Z" level=info msg="RemoveContainer for \"a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5\" returns successfully" Jun 20 19:54:27.695492 kubelet[2815]: I0620 19:54:27.695391 2815 scope.go:117] "RemoveContainer" containerID="70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e" Jun 20 19:54:27.701702 containerd[1551]: time="2025-06-20T19:54:27.701585675Z" level=info msg="RemoveContainer for \"70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e\"" Jun 20 19:54:27.752518 containerd[1551]: time="2025-06-20T19:54:27.752432299Z" level=info msg="RemoveContainer for \"70d87ae863ed8344bbe613be81e7c0b0f01b79f52354434c5b1b1fd03568527e\" returns successfully" Jun 20 19:54:27.753788 kubelet[2815]: I0620 19:54:27.753614 2815 scope.go:117] "RemoveContainer" containerID="cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475" Jun 20 19:54:27.764338 containerd[1551]: time="2025-06-20T19:54:27.763460517Z" level=info 
msg="RemoveContainer for \"cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475\"" Jun 20 19:54:27.853419 containerd[1551]: time="2025-06-20T19:54:27.852247511Z" level=info msg="RemoveContainer for \"cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475\" returns successfully" Jun 20 19:54:28.257950 sshd[6985]: Accepted publickey for core from 172.24.4.1 port 36002 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:54:28.261985 sshd-session[6985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:54:28.286386 systemd-logind[1537]: New session 29 of user core. Jun 20 19:54:28.308215 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 20 19:54:29.317228 sshd[7031]: Connection closed by 172.24.4.1 port 36002 Jun 20 19:54:29.317812 sshd-session[6985]: pam_unix(sshd:session): session closed for user core Jun 20 19:54:29.333278 systemd[1]: sshd@26-172.24.4.123:22-172.24.4.1:36002.service: Deactivated successfully. Jun 20 19:54:29.347651 systemd[1]: session-29.scope: Deactivated successfully. Jun 20 19:54:29.356524 systemd-logind[1537]: Session 29 logged out. Waiting for processes to exit. Jun 20 19:54:29.381675 systemd-logind[1537]: Removed session 29. Jun 20 19:54:36.448276 kubelet[2815]: I0620 19:54:36.444797 2815 scope.go:117] "RemoveContainer" containerID="e32987cc8d39065a4fb5ae8f601250b8b864464bae3b252a556105da3d90ddb3" Jun 20 19:54:36.448276 kubelet[2815]: I0620 19:54:36.446781 2815 scope.go:117] "RemoveContainer" containerID="a06a536ad1329a526c2ba4ba469b1d5d3e97737025d3e3a9da1dd57c896de2b9" Jun 20 19:54:36.470732 systemd[1]: Started sshd@27-172.24.4.123:22-172.24.4.1:58356.service - OpenSSH per-connection server daemon (172.24.4.1:58356). 
Jun 20 19:54:38.595418 containerd[1551]: time="2025-06-20T19:54:38.594570904Z" level=info msg="CreateContainer within sandbox \"21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jun 20 19:54:38.599271 containerd[1551]: time="2025-06-20T19:54:38.598719720Z" level=info msg="CreateContainer within sandbox \"73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jun 20 19:54:40.069974 containerd[1551]: time="2025-06-20T19:54:40.069533910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"5c8243b898fd3fbe7d36d91f28e293c0a45833c65044a7157e49e08002fce049\" pid:7061 exited_at:{seconds:1750449280 nanos:65348353}" Jun 20 19:54:41.150750 kubelet[2815]: I0620 19:54:41.150675 2815 scope.go:117] "RemoveContainer" containerID="2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287" Jun 20 19:54:41.266451 containerd[1551]: time="2025-06-20T19:54:41.266269257Z" level=info msg="CreateContainer within sandbox \"aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jun 20 19:54:43.139243 sshd[7045]: Accepted publickey for core from 172.24.4.1 port 58356 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:54:43.142744 sshd-session[7045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:54:43.166415 systemd-logind[1537]: New session 30 of user core. Jun 20 19:54:43.180767 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jun 20 19:54:44.542394 containerd[1551]: time="2025-06-20T19:54:44.542311295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"bcf68f91bd65a382cf400e1b0799dd50c1fc7093dd0adeeefba85ffd2a2ff885\" pid:7085 exited_at:{seconds:1750449284 nanos:541451798}" Jun 20 19:54:44.787272 containerd[1551]: time="2025-06-20T19:54:44.786951624Z" level=info msg="Container 523fadbca9145f76af30363126a0a8e369c28ce758813987f8083d2240543e13: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:54:46.431289 sshd[7096]: Connection closed by 172.24.4.1 port 58356 Jun 20 19:54:46.433283 sshd-session[7045]: pam_unix(sshd:session): session closed for user core Jun 20 19:54:46.446583 systemd[1]: sshd@27-172.24.4.123:22-172.24.4.1:58356.service: Deactivated successfully. Jun 20 19:54:46.462328 systemd[1]: session-30.scope: Deactivated successfully. Jun 20 19:54:46.475975 systemd-logind[1537]: Session 30 logged out. Waiting for processes to exit. Jun 20 19:54:46.487434 containerd[1551]: time="2025-06-20T19:54:46.486649652Z" level=info msg="Container a2c4d7438b86a7f9a13b0efb257cf4c9a0ba1ab8057feb24d8fe0f1cf9a8debb: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:54:46.503582 systemd-logind[1537]: Removed session 30. Jun 20 19:54:46.527702 containerd[1551]: time="2025-06-20T19:54:46.527621828Z" level=info msg="Container d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:54:46.541570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176391308.mount: Deactivated successfully. 
Jun 20 19:54:46.804630 containerd[1551]: time="2025-06-20T19:54:46.804572723Z" level=info msg="CreateContainer within sandbox \"21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"a2c4d7438b86a7f9a13b0efb257cf4c9a0ba1ab8057feb24d8fe0f1cf9a8debb\"" Jun 20 19:54:46.806067 containerd[1551]: time="2025-06-20T19:54:46.806016861Z" level=info msg="StartContainer for \"a2c4d7438b86a7f9a13b0efb257cf4c9a0ba1ab8057feb24d8fe0f1cf9a8debb\"" Jun 20 19:54:46.812540 containerd[1551]: time="2025-06-20T19:54:46.812374074Z" level=info msg="connecting to shim a2c4d7438b86a7f9a13b0efb257cf4c9a0ba1ab8057feb24d8fe0f1cf9a8debb" address="unix:///run/containerd/s/422fa2b4b6dcaa4fc0b1e4021a71b74504aa2694ec8da4e6566b1eab89e29a3a" protocol=ttrpc version=3 Jun 20 19:54:46.850499 systemd[1]: Started cri-containerd-a2c4d7438b86a7f9a13b0efb257cf4c9a0ba1ab8057feb24d8fe0f1cf9a8debb.scope - libcontainer container a2c4d7438b86a7f9a13b0efb257cf4c9a0ba1ab8057feb24d8fe0f1cf9a8debb. 
Jun 20 19:54:47.100138 containerd[1551]: time="2025-06-20T19:54:47.099899002Z" level=info msg="StartContainer for \"a2c4d7438b86a7f9a13b0efb257cf4c9a0ba1ab8057feb24d8fe0f1cf9a8debb\" returns successfully" Jun 20 19:54:47.102471 containerd[1551]: time="2025-06-20T19:54:47.101499163Z" level=info msg="CreateContainer within sandbox \"73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"523fadbca9145f76af30363126a0a8e369c28ce758813987f8083d2240543e13\"" Jun 20 19:54:47.103742 containerd[1551]: time="2025-06-20T19:54:47.103707899Z" level=info msg="StartContainer for \"523fadbca9145f76af30363126a0a8e369c28ce758813987f8083d2240543e13\"" Jun 20 19:54:47.106350 containerd[1551]: time="2025-06-20T19:54:47.106290470Z" level=info msg="connecting to shim 523fadbca9145f76af30363126a0a8e369c28ce758813987f8083d2240543e13" address="unix:///run/containerd/s/c8265e16fe113449f820f238250199cf923dac65d531d83c42add05f31d2a4d6" protocol=ttrpc version=3 Jun 20 19:54:47.125858 containerd[1551]: time="2025-06-20T19:54:47.125798413Z" level=info msg="CreateContainer within sandbox \"aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a\"" Jun 20 19:54:47.127432 containerd[1551]: time="2025-06-20T19:54:47.127390709Z" level=info msg="StartContainer for \"d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a\"" Jun 20 19:54:47.129200 containerd[1551]: time="2025-06-20T19:54:47.129132758Z" level=info msg="connecting to shim d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a" address="unix:///run/containerd/s/8d170f4a54531fc0fcffe4942d5a7bdbff5c1b2584a97fa97cd7568a511883ba" protocol=ttrpc version=3 Jun 20 19:54:47.154499 systemd[1]: Started 
cri-containerd-523fadbca9145f76af30363126a0a8e369c28ce758813987f8083d2240543e13.scope - libcontainer container 523fadbca9145f76af30363126a0a8e369c28ce758813987f8083d2240543e13. Jun 20 19:54:47.191004 systemd[1]: Started cri-containerd-d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a.scope - libcontainer container d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a. Jun 20 19:54:47.351998 containerd[1551]: time="2025-06-20T19:54:47.351082772Z" level=info msg="StartContainer for \"523fadbca9145f76af30363126a0a8e369c28ce758813987f8083d2240543e13\" returns successfully" Jun 20 19:54:47.526253 containerd[1551]: time="2025-06-20T19:54:47.526200150Z" level=info msg="StartContainer for \"d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a\" returns successfully" Jun 20 19:54:48.434505 containerd[1551]: time="2025-06-20T19:54:48.434447817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"8c6ba947662ed7333971b69f72f0e7928db323ad167a5b85775c9ddcde41bc59\" pid:7214 exited_at:{seconds:1750449288 nanos:433558553}" Jun 20 19:54:49.783622 containerd[1551]: time="2025-06-20T19:54:49.783536581Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"d608aaefb9f1aa93fb98826bf5cfa9f6f91962b562f4a878ec89f4cc8f3bb782\" pid:7241 exited_at:{seconds:1750449289 nanos:782656415}" Jun 20 19:54:51.651120 systemd[1]: Started sshd@28-172.24.4.123:22-172.24.4.1:44202.service - OpenSSH per-connection server daemon (172.24.4.1:44202). 
Jun 20 19:55:01.157453 sshd[7251]: Accepted publickey for core from 172.24.4.1 port 44202 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:55:01.399692 kubelet[2815]: E0620 19:55:01.399436 2815 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jun 20 19:55:01.418013 systemd[1]: cri-containerd-d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a.scope: Deactivated successfully. Jun 20 19:55:01.437602 sshd-session[7251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:55:01.459638 containerd[1551]: time="2025-06-20T19:55:01.459396904Z" level=info msg="received exit event container_id:\"d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a\" id:\"d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a\" pid:7171 exit_status:1 exited_at:{seconds:1750449301 nanos:421265293}" Jun 20 19:55:01.461954 containerd[1551]: time="2025-06-20T19:55:01.461814626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a\" id:\"d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a\" pid:7171 exit_status:1 exited_at:{seconds:1750449301 nanos:421265293}" Jun 20 19:55:01.465794 systemd-logind[1537]: New session 31 of user core. Jun 20 19:55:01.471972 systemd[1]: Started session-31.scope - Session 31 of User core. Jun 20 19:55:01.481654 kubelet[2815]: E0620 19:55:01.481576 2815 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.334s" Jun 20 19:55:01.547455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a-rootfs.mount: Deactivated successfully. 
Jun 20 19:55:02.197552 kubelet[2815]: E0620 19:55:02.197327 2815 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4344-1-0-0-4524070979.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jun 20 19:55:02.530138 sshd[7259]: Connection closed by 172.24.4.1 port 44202
Jun 20 19:55:02.531593 sshd-session[7251]: pam_unix(sshd:session): session closed for user core
Jun 20 19:55:02.542400 systemd-logind[1537]: Session 31 logged out. Waiting for processes to exit.
Jun 20 19:55:02.543857 systemd[1]: sshd@28-172.24.4.123:22-172.24.4.1:44202.service: Deactivated successfully.
Jun 20 19:55:02.554541 systemd[1]: session-31.scope: Deactivated successfully.
Jun 20 19:55:02.564274 systemd-logind[1537]: Removed session 31.
Jun 20 19:55:03.519204 kubelet[2815]: I0620 19:55:03.515992 2815 scope.go:117] "RemoveContainer" containerID="2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287"
Jun 20 19:55:03.519204 kubelet[2815]: I0620 19:55:03.516546 2815 scope.go:117] "RemoveContainer" containerID="d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a"
Jun 20 19:55:03.519204 kubelet[2815]: E0620 19:55:03.516859 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-68f7c7984d-hn5z4_tigera-operator(6dceac94-5d8a-4a17-880a-b168f5c68e50)\"" pod="tigera-operator/tigera-operator-68f7c7984d-hn5z4" podUID="6dceac94-5d8a-4a17-880a-b168f5c68e50"
Jun 20 19:55:03.524533 containerd[1551]: time="2025-06-20T19:55:03.524453058Z" level=info msg="RemoveContainer for \"2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287\""
Jun 20 19:55:03.735687 containerd[1551]: time="2025-06-20T19:55:03.735578518Z" level=info msg="RemoveContainer for \"2ef6c6fb74beb914cf0bbb7be54c625b3588bc89f28d7b5cb2c0d3ed65319287\" returns successfully"
Jun 20 19:55:07.471531 systemd[1]: Started sshd@29-172.24.4.123:22-172.24.4.1:49466.service - OpenSSH per-connection server daemon (172.24.4.1:49466).
Jun 20 19:55:08.825053 sshd[7282]: Accepted publickey for core from 172.24.4.1 port 49466 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:55:08.828280 sshd-session[7282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:55:08.865276 systemd-logind[1537]: New session 32 of user core.
Jun 20 19:55:08.882590 systemd[1]: Started session-32.scope - Session 32 of User core.
Jun 20 19:55:09.764211 sshd[7289]: Connection closed by 172.24.4.1 port 49466
Jun 20 19:55:09.766730 sshd-session[7282]: pam_unix(sshd:session): session closed for user core
Jun 20 19:55:09.783607 systemd[1]: sshd@29-172.24.4.123:22-172.24.4.1:49466.service: Deactivated successfully.
Jun 20 19:55:09.799883 systemd[1]: session-32.scope: Deactivated successfully.
Jun 20 19:55:09.804783 systemd-logind[1537]: Session 32 logged out. Waiting for processes to exit.
Jun 20 19:55:09.810044 systemd-logind[1537]: Removed session 32.
Jun 20 19:55:10.049595 containerd[1551]: time="2025-06-20T19:55:10.049420535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f30c966942d87a07abd0d0d545c7ceac4f9e32921d73c2890249fe31be6c2f26\" id:\"42903f5814cf48eef06684bcd37dcad2a26375ac0c5ded96a6051ecad26c3c3c\" pid:7313 exited_at:{seconds:1750449310 nanos:48158038}"
Jun 20 19:55:11.116442 containerd[1551]: time="2025-06-20T19:55:11.080720833Z" level=warning msg="container event discarded" container=21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9 type=CONTAINER_CREATED_EVENT
Jun 20 19:55:11.163664 containerd[1551]: time="2025-06-20T19:55:11.163500045Z" level=warning msg="container event discarded" container=21621724fc2069ae5e029f365714f361ede7581a17c10732045667c64a1d14f9 type=CONTAINER_STARTED_EVENT
Jun 20 19:55:11.163664 containerd[1551]: time="2025-06-20T19:55:11.163645959Z" level=warning msg="container event discarded" container=73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc type=CONTAINER_CREATED_EVENT
Jun 20 19:55:11.163664 containerd[1551]: time="2025-06-20T19:55:11.163671728Z" level=warning msg="container event discarded" container=73c34d8aea8491e65eb994ed744cafc9b329d85e2bf69e2b30e3f00b866a0cfc type=CONTAINER_STARTED_EVENT
Jun 20 19:55:11.268894 containerd[1551]: time="2025-06-20T19:55:11.268579123Z" level=warning msg="container event discarded" container=88b502aa141e4978eea3e1cbe5bc8a95c19fea34b134ba40c16ad2b24e26df67 type=CONTAINER_CREATED_EVENT
Jun 20 19:55:11.269302 containerd[1551]: time="2025-06-20T19:55:11.269095305Z" level=warning msg="container event discarded" container=88b502aa141e4978eea3e1cbe5bc8a95c19fea34b134ba40c16ad2b24e26df67 type=CONTAINER_STARTED_EVENT
Jun 20 19:55:12.030272 containerd[1551]: time="2025-06-20T19:55:12.029960419Z" level=warning msg="container event discarded" container=a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5 type=CONTAINER_CREATED_EVENT
Jun 20 19:55:12.030272 containerd[1551]: time="2025-06-20T19:55:12.030240074Z" level=warning msg="container event discarded" container=cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475 type=CONTAINER_CREATED_EVENT
Jun 20 19:55:12.041963 containerd[1551]: time="2025-06-20T19:55:12.041776951Z" level=warning msg="container event discarded" container=8bba20d58697886c1d10e145c2032ccb1622d05807707c4b6f4104609c2e1cd6 type=CONTAINER_CREATED_EVENT
Jun 20 19:55:12.197579 containerd[1551]: time="2025-06-20T19:55:12.197409893Z" level=warning msg="container event discarded" container=cd48a8a374bbd0399c187da5eb401a54b65bc59934bfddacc151142daf319475 type=CONTAINER_STARTED_EVENT
Jun 20 19:55:12.219573 containerd[1551]: time="2025-06-20T19:55:12.219376002Z" level=warning msg="container event discarded" container=8bba20d58697886c1d10e145c2032ccb1622d05807707c4b6f4104609c2e1cd6 type=CONTAINER_STARTED_EVENT
Jun 20 19:55:12.233032 containerd[1551]: time="2025-06-20T19:55:12.232740098Z" level=warning msg="container event discarded" container=a0fa2afdd962ecefc4f8fe3dd792b5d864a9038c147c8d209fcf0ce4ca33c8f5 type=CONTAINER_STARTED_EVENT
Jun 20 19:55:12.953641 containerd[1551]: time="2025-06-20T19:55:12.953530777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6817ba866d717a771ef5f1e11120f1de863da648a17d17e8c044657d2ca5315d\" id:\"e29d5f2195bdfcdd0019c0b9f7458e61ec220da3de824b3935a75c769c9e4bf5\" pid:7336 exited_at:{seconds:1750449312 nanos:951241397}"
Jun 20 19:55:14.807068 systemd[1]: Started sshd@30-172.24.4.123:22-172.24.4.1:34032.service - OpenSSH per-connection server daemon (172.24.4.1:34032).
Jun 20 19:55:16.060540 sshd[7348]: Accepted publickey for core from 172.24.4.1 port 34032 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:55:16.074654 sshd-session[7348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:55:16.093258 systemd-logind[1537]: New session 33 of user core.
Jun 20 19:55:16.103569 systemd[1]: Started session-33.scope - Session 33 of User core.
Jun 20 19:55:16.885720 sshd[7350]: Connection closed by 172.24.4.1 port 34032
Jun 20 19:55:16.887361 sshd-session[7348]: pam_unix(sshd:session): session closed for user core
Jun 20 19:55:16.896755 systemd[1]: sshd@30-172.24.4.123:22-172.24.4.1:34032.service: Deactivated successfully.
Jun 20 19:55:16.905998 systemd[1]: session-33.scope: Deactivated successfully.
Jun 20 19:55:16.911444 systemd-logind[1537]: Session 33 logged out. Waiting for processes to exit.
Jun 20 19:55:16.915403 systemd-logind[1537]: Removed session 33.
Jun 20 19:55:17.150870 kubelet[2815]: I0620 19:55:17.149340 2815 scope.go:117] "RemoveContainer" containerID="d8d05e799adc98c6c66df4a1c1a2c778ae6a59f76952c80e8650c2e6bf19da7a"
Jun 20 19:55:17.153287 kubelet[2815]: E0620 19:55:17.152704 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-68f7c7984d-hn5z4_tigera-operator(6dceac94-5d8a-4a17-880a-b168f5c68e50)\"" pod="tigera-operator/tigera-operator-68f7c7984d-hn5z4" podUID="6dceac94-5d8a-4a17-880a-b168f5c68e50"
Jun 20 19:55:19.629209 containerd[1551]: time="2025-06-20T19:55:19.629083523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"bd0b79125dcb6afb730b62548dd2dc57fd8da32d555e7a8ecce6211c6a7ec5e0\" pid:7376 exited_at:{seconds:1750449319 nanos:627665644}"
Jun 20 19:55:19.788075 containerd[1551]: time="2025-06-20T19:55:19.788008718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7851e1dcd9b9175f29d6a276a93f7f526593f5a9dd7d887e4a229ee7ab229734\" id:\"28d7856f94201bd2327bb3e4243b14de21f70a50cb5dc26dca78e336ae4f1269\" pid:7398 exited_at:{seconds:1750449319 nanos:787290296}"
Jun 20 19:55:24.668996 containerd[1551]: time="2025-06-20T19:55:24.668680354Z" level=warning msg="container event discarded" container=a91b3a89ca0964ce5203167fcb02cce45fb309f5ccf57602c507a043bce9d472 type=CONTAINER_CREATED_EVENT
Jun 20 19:55:24.668996 containerd[1551]: time="2025-06-20T19:55:24.668760905Z" level=warning msg="container event discarded" container=a91b3a89ca0964ce5203167fcb02cce45fb309f5ccf57602c507a043bce9d472 type=CONTAINER_STARTED_EVENT
Jun 20 19:55:24.709316 containerd[1551]: time="2025-06-20T19:55:24.709136424Z" level=warning msg="container event discarded" container=98b9b0e92f91ee8290d0d28d026566fc7353eb53bc432bdcfd53c62e83e4d691 type=CONTAINER_CREATED_EVENT
Jun 20 19:55:24.792995 containerd[1551]: time="2025-06-20T19:55:24.791815102Z" level=warning msg="container event discarded" container=98b9b0e92f91ee8290d0d28d026566fc7353eb53bc432bdcfd53c62e83e4d691 type=CONTAINER_STARTED_EVENT
Jun 20 19:55:24.948735 containerd[1551]: time="2025-06-20T19:55:24.948394471Z" level=warning msg="container event discarded" container=aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc type=CONTAINER_CREATED_EVENT
Jun 20 19:55:24.948735 containerd[1551]: time="2025-06-20T19:55:24.948495581Z" level=warning msg="container event discarded" container=aee0cf702fec8adb804e594b4ad5c715568850b6dc775aba04cd305e2e7c34dc type=CONTAINER_STARTED_EVENT