Jun 20 19:41:52.993862 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:41:52.993912 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:41:52.993932 kernel: BIOS-provided physical RAM map:
Jun 20 19:41:52.993951 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 20 19:41:52.993966 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 20 19:41:52.993981 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 20 19:41:52.993998 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jun 20 19:41:52.994014 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jun 20 19:41:52.994029 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 20 19:41:52.994044 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 20 19:41:52.994060 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jun 20 19:41:52.994075 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 20 19:41:52.994092 kernel: NX (Execute Disable) protection: active
Jun 20 19:41:52.994107 kernel: APIC: Static calls initialized
Jun 20 19:41:52.994124 kernel: SMBIOS 3.0.0 present.
Jun 20 19:41:52.994139 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jun 20 19:41:52.994154 kernel: DMI: Memory slots populated: 1/1
Jun 20 19:41:52.994171 kernel: Hypervisor detected: KVM
Jun 20 19:41:52.997953 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 20 19:41:52.997963 kernel: kvm-clock: using sched offset of 4650141717 cycles
Jun 20 19:41:52.997973 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 20 19:41:52.997981 kernel: tsc: Detected 1996.249 MHz processor
Jun 20 19:41:52.997990 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:41:52.997999 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:41:52.998008 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jun 20 19:41:52.998017 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 20 19:41:52.998030 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:41:52.998038 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jun 20 19:41:52.998047 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:41:52.998055 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jun 20 19:41:52.998063 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:41:52.998072 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:41:52.998081 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:41:52.998089 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jun 20 19:41:52.998097 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:41:52.998107 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:41:52.998115 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jun 20 19:41:52.998124 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jun 20 19:41:52.998132 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jun 20 19:41:52.998140 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jun 20 19:41:52.998152 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jun 20 19:41:52.998160 kernel: No NUMA configuration found
Jun 20 19:41:52.998170 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jun 20 19:41:52.998195 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff]
Jun 20 19:41:52.998203 kernel: Zone ranges:
Jun 20 19:41:52.998212 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:41:52.998221 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 20 19:41:52.998229 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jun 20 19:41:52.998238 kernel: Device empty
Jun 20 19:41:52.998246 kernel: Movable zone start for each node
Jun 20 19:41:52.998257 kernel: Early memory node ranges
Jun 20 19:41:52.998266 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 20 19:41:52.998274 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jun 20 19:41:52.998283 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jun 20 19:41:52.998292 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jun 20 19:41:52.998301 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:41:52.998309 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 20 19:41:52.998318 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jun 20 19:41:52.998327 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 20 19:41:52.998337 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 20 19:41:52.998346 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:41:52.998355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 20 19:41:52.998363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 20 19:41:52.998372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:41:52.998380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 20 19:41:52.998389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 20 19:41:52.998398 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:41:52.998406 kernel: CPU topo: Max. logical packages: 2
Jun 20 19:41:52.998417 kernel: CPU topo: Max. logical dies: 2
Jun 20 19:41:52.998425 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:41:52.998434 kernel: CPU topo: Max. threads per core: 1
Jun 20 19:41:52.998442 kernel: CPU topo: Num. cores per package: 1
Jun 20 19:41:52.998451 kernel: CPU topo: Num. threads per package: 1
Jun 20 19:41:52.998459 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jun 20 19:41:52.998467 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 20 19:41:52.998476 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jun 20 19:41:52.998484 kernel: Booting paravirtualized kernel on KVM
Jun 20 19:41:52.998495 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:41:52.998503 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 19:41:52.998512 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jun 20 19:41:52.998520 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jun 20 19:41:52.998529 kernel: pcpu-alloc: [0] 0 1
Jun 20 19:41:52.998537 kernel: kvm-guest: PV spinlocks disabled, no host support
Jun 20 19:41:52.998547 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:41:52.998556 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:41:52.998566 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:41:52.998575 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 19:41:52.998584 kernel: Fallback order for Node 0: 0
Jun 20 19:41:52.998592 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jun 20 19:41:52.998601 kernel: Policy zone: Normal
Jun 20 19:41:52.998609 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:41:52.998618 kernel: software IO TLB: area num 2.
Jun 20 19:41:52.998627 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 19:41:52.998635 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:41:52.998645 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:41:52.998654 kernel: Dynamic Preempt: voluntary
Jun 20 19:41:52.998663 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:41:52.998672 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:41:52.998681 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 19:41:52.998690 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:41:52.998699 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:41:52.998707 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:41:52.998716 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:41:52.998726 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 19:41:52.998734 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:41:52.998743 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:41:52.998752 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:41:52.998761 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 20 19:41:52.998770 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:41:52.998778 kernel: Console: colour VGA+ 80x25
Jun 20 19:41:52.998787 kernel: printk: legacy console [tty0] enabled
Jun 20 19:41:52.998796 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:41:52.998806 kernel: ACPI: Core revision 20240827
Jun 20 19:41:52.998815 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:41:52.998823 kernel: x2apic enabled
Jun 20 19:41:52.998832 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:41:52.998840 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 20 19:41:52.998850 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jun 20 19:41:52.998864 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jun 20 19:41:52.998874 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 20 19:41:52.998883 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 20 19:41:52.998893 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:41:52.998902 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:41:52.998911 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:41:52.998922 kernel: Speculative Store Bypass: Vulnerable
Jun 20 19:41:52.998931 kernel: x86/fpu: x87 FPU will use FXSAVE
Jun 20 19:41:52.998940 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:41:52.998949 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:41:52.998958 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:41:52.998969 kernel: landlock: Up and running.
Jun 20 19:41:52.998978 kernel: SELinux: Initializing.
Jun 20 19:41:52.998987 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:41:52.998996 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:41:52.999005 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jun 20 19:41:52.999014 kernel: Performance Events: AMD PMU driver.
Jun 20 19:41:52.999023 kernel: ... version:                0
Jun 20 19:41:52.999032 kernel: ... bit width:              48
Jun 20 19:41:52.999041 kernel: ... generic registers:      4
Jun 20 19:41:52.999052 kernel: ... value mask:             0000ffffffffffff
Jun 20 19:41:52.999061 kernel: ... max period:             00007fffffffffff
Jun 20 19:41:52.999070 kernel: ... fixed-purpose events:   0
Jun 20 19:41:52.999079 kernel: ... event mask:             000000000000000f
Jun 20 19:41:52.999088 kernel: signal: max sigframe size: 1440
Jun 20 19:41:52.999097 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:41:52.999106 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:41:52.999115 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:41:52.999124 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:41:52.999135 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:41:52.999144 kernel: .... node #0, CPUs: #1
Jun 20 19:41:52.999153 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 19:41:52.999162 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jun 20 19:41:52.999172 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 227296K reserved, 0K cma-reserved)
Jun 20 19:41:52.999198 kernel: devtmpfs: initialized
Jun 20 19:41:52.999207 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:41:52.999216 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:41:52.999225 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 19:41:52.999236 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:41:52.999245 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:41:52.999254 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:41:52.999264 kernel: audit: type=2000 audit(1750448509.004:1): state=initialized audit_enabled=0 res=1
Jun 20 19:41:52.999272 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:41:52.999281 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:41:52.999290 kernel: cpuidle: using governor menu
Jun 20 19:41:52.999299 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:41:52.999308 kernel: dca service started, version 1.12.1
Jun 20 19:41:52.999319 kernel: PCI: Using configuration type 1 for base access
Jun 20 19:41:52.999329 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:41:52.999338 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:41:52.999347 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:41:52.999356 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:41:52.999365 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:41:52.999374 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:41:52.999383 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:41:52.999392 kernel: ACPI: Interpreter enabled
Jun 20 19:41:52.999403 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 20 19:41:52.999412 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:41:52.999421 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:41:52.999430 kernel: PCI: Using E820 reservations for host bridge windows
Jun 20 19:41:52.999439 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 20 19:41:52.999448 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 19:41:52.999639 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 19:41:52.999731 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jun 20 19:41:52.999822 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jun 20 19:41:52.999836 kernel: acpiphp: Slot [3] registered
Jun 20 19:41:52.999845 kernel: acpiphp: Slot [4] registered
Jun 20 19:41:52.999854 kernel: acpiphp: Slot [5] registered
Jun 20 19:41:52.999863 kernel: acpiphp: Slot [6] registered
Jun 20 19:41:52.999872 kernel: acpiphp: Slot [7] registered
Jun 20 19:41:52.999881 kernel: acpiphp: Slot [8] registered
Jun 20 19:41:52.999890 kernel: acpiphp: Slot [9] registered
Jun 20 19:41:52.999900 kernel: acpiphp: Slot [10] registered
Jun 20 19:41:52.999911 kernel: acpiphp: Slot [11] registered
Jun 20 19:41:52.999921 kernel: acpiphp: Slot [12] registered
Jun 20 19:41:52.999929 kernel: acpiphp: Slot [13] registered
Jun 20 19:41:52.999939 kernel: acpiphp: Slot [14] registered
Jun 20 19:41:52.999948 kernel: acpiphp: Slot [15] registered
Jun 20 19:41:52.999957 kernel: acpiphp: Slot [16] registered
Jun 20 19:41:52.999966 kernel: acpiphp: Slot [17] registered
Jun 20 19:41:52.999975 kernel: acpiphp: Slot [18] registered
Jun 20 19:41:52.999984 kernel: acpiphp: Slot [19] registered
Jun 20 19:41:52.999994 kernel: acpiphp: Slot [20] registered
Jun 20 19:41:53.000003 kernel: acpiphp: Slot [21] registered
Jun 20 19:41:53.000012 kernel: acpiphp: Slot [22] registered
Jun 20 19:41:53.000021 kernel: acpiphp: Slot [23] registered
Jun 20 19:41:53.000030 kernel: acpiphp: Slot [24] registered
Jun 20 19:41:53.000039 kernel: acpiphp: Slot [25] registered
Jun 20 19:41:53.000048 kernel: acpiphp: Slot [26] registered
Jun 20 19:41:53.000057 kernel: acpiphp: Slot [27] registered
Jun 20 19:41:53.000066 kernel: acpiphp: Slot [28] registered
Jun 20 19:41:53.000077 kernel: acpiphp: Slot [29] registered
Jun 20 19:41:53.000086 kernel: acpiphp: Slot [30] registered
Jun 20 19:41:53.000095 kernel: acpiphp: Slot [31] registered
Jun 20 19:41:53.000104 kernel: PCI host bridge to bus 0000:00
Jun 20 19:41:53.000218 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 20 19:41:53.000301 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 20 19:41:53.000378 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 20 19:41:53.000453 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jun 20 19:41:53.000533 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jun 20 19:41:53.000608 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 20 19:41:53.000712 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jun 20 19:41:53.000813 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jun 20 19:41:53.000912 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jun 20 19:41:53.001001 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
Jun 20 19:41:53.001091 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jun 20 19:41:53.001198 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jun 20 19:41:53.001293 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jun 20 19:41:53.001379 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jun 20 19:41:53.001547 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jun 20 19:41:53.001674 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jun 20 19:41:53.001772 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jun 20 19:41:53.001885 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jun 20 19:41:53.001982 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jun 20 19:41:53.002076 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
Jun 20 19:41:53.002171 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jun 20 19:41:53.004755 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jun 20 19:41:53.004844 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 20 19:41:53.004945 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jun 20 19:41:53.005043 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
Jun 20 19:41:53.005129 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jun 20 19:41:53.005240 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
Jun 20 19:41:53.005330 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jun 20 19:41:53.005462 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jun 20 19:41:53.005621 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jun 20 19:41:53.005721 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jun 20 19:41:53.005807 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
Jun 20 19:41:53.005901 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jun 20 19:41:53.005988 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
Jun 20 19:41:53.006075 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jun 20 19:41:53.006167 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 20 19:41:53.007974 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
Jun 20 19:41:53.008071 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Jun 20 19:41:53.008159 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
Jun 20 19:41:53.008189 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 20 19:41:53.008200 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 20 19:41:53.008210 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 20 19:41:53.008219 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 20 19:41:53.008229 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 20 19:41:53.008238 kernel: iommu: Default domain type: Translated
Jun 20 19:41:53.008247 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:41:53.008259 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:41:53.008269 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 20 19:41:53.008278 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 20 19:41:53.008288 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jun 20 19:41:53.008380 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 20 19:41:53.008467 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 20 19:41:53.008554 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 20 19:41:53.008567 kernel: vgaarb: loaded
Jun 20 19:41:53.008580 kernel: clocksource: Switched to clocksource kvm-clock
Jun 20 19:41:53.008589 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:41:53.008598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:41:53.008607 kernel: pnp: PnP ACPI init
Jun 20 19:41:53.008699 kernel: pnp 00:03: [dma 2]
Jun 20 19:41:53.008714 kernel: pnp: PnP ACPI: found 5 devices
Jun 20 19:41:53.008723 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:41:53.008733 kernel: NET: Registered PF_INET protocol family
Jun 20 19:41:53.008742 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:41:53.008755 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 20 19:41:53.008764 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:41:53.008773 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 19:41:53.008783 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 20 19:41:53.008792 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 20 19:41:53.008801 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:41:53.008811 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:41:53.008820 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:41:53.008829 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:41:53.008911 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 20 19:41:53.008988 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 20 19:41:53.009062 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 20 19:41:53.009137 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jun 20 19:41:53.009237 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jun 20 19:41:53.009328 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 20 19:41:53.009448 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 20 19:41:53.009472 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:41:53.009490 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 20 19:41:53.009507 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jun 20 19:41:53.009524 kernel: Initialise system trusted keyrings
Jun 20 19:41:53.009539 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 20 19:41:53.009553 kernel: Key type asymmetric registered
Jun 20 19:41:53.009567 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:41:53.009585 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:41:53.009600 kernel: io scheduler mq-deadline registered
Jun 20 19:41:53.009620 kernel: io scheduler kyber registered
Jun 20 19:41:53.009636 kernel: io scheduler bfq registered
Jun 20 19:41:53.009650 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:41:53.009661 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jun 20 19:41:53.009670 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 20 19:41:53.009679 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jun 20 19:41:53.009689 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 20 19:41:53.009698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:41:53.009707 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:41:53.009718 kernel: random: crng init done
Jun 20 19:41:53.009727 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 20 19:41:53.009736 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 19:41:53.009745 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 20 19:41:53.009754 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 20 19:41:53.009846 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 20 19:41:53.009926 kernel: rtc_cmos 00:04: registered as rtc0
Jun 20 19:41:53.010007 kernel: rtc_cmos 00:04: setting system clock to 2025-06-20T19:41:52 UTC (1750448512)
Jun 20 19:41:53.010089 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jun 20 19:41:53.010102 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 20 19:41:53.010112 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:41:53.010121 kernel: Segment Routing with IPv6
Jun 20 19:41:53.010130 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:41:53.010140 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:41:53.010149 kernel: Key type dns_resolver registered
Jun 20 19:41:53.010158 kernel: IPI shorthand broadcast: enabled
Jun 20 19:41:53.010167 kernel: sched_clock: Marking stable (3669008424, 181768610)->(3866070705, -15293671)
Jun 20 19:41:53.011548 kernel: registered taskstats version 1
Jun 20 19:41:53.011562 kernel: Loading compiled-in X.509 certificates
Jun 20 19:41:53.011572 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:41:53.011601 kernel: Demotion targets for Node 0: null
Jun 20 19:41:53.011611 kernel: Key type .fscrypt registered
Jun 20 19:41:53.011620 kernel: Key type fscrypt-provisioning registered
Jun 20 19:41:53.011630 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 19:41:53.011639 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:41:53.011648 kernel: ima: No architecture policies found
Jun 20 19:41:53.011662 kernel: clk: Disabling unused clocks
Jun 20 19:41:53.011671 kernel: Warning: unable to open an initial console.
Jun 20 19:41:53.011681 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 20 19:41:53.011690 kernel: Write protecting the kernel read-only data: 24576k
Jun 20 19:41:53.011699 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 20 19:41:53.011708 kernel: Run /init as init process
Jun 20 19:41:53.011717 kernel: with arguments:
Jun 20 19:41:53.011727 kernel: /init
Jun 20 19:41:53.011736 kernel: with environment:
Jun 20 19:41:53.011746 kernel: HOME=/
Jun 20 19:41:53.011756 kernel: TERM=linux
Jun 20 19:41:53.011764 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:41:53.011776 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:41:53.011789 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:41:53.011800 systemd[1]: Detected virtualization kvm.
Jun 20 19:41:53.011810 systemd[1]: Detected architecture x86-64.
Jun 20 19:41:53.011828 systemd[1]: Running in initrd.
Jun 20 19:41:53.011840 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:41:53.011850 systemd[1]: Hostname set to .
Jun 20 19:41:53.011860 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:41:53.011870 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:41:53.011880 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:41:53.011893 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:41:53.011904 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:41:53.011914 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:41:53.011924 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 19:41:53.011935 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 19:41:53.011946 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 19:41:53.011956 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 19:41:53.011969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:41:53.011979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:41:53.011989 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:41:53.011999 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:41:53.012009 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:41:53.012018 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:41:53.012028 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:41:53.012038 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:41:53.012050 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 19:41:53.012060 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 19:41:53.012072 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:41:53.012082 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:41:53.012092 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:41:53.012102 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:41:53.012114 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:41:53.012124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:41:53.012134 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:41:53.012146 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 19:41:53.012156 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:41:53.012168 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:41:53.013157 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:41:53.013172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:41:53.013208 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:41:53.013219 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:41:53.013229 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:41:53.013240 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:41:53.013287 systemd-journald[214]: Collecting audit messages is disabled. Jun 20 19:41:53.013315 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:41:53.013327 systemd-journald[214]: Journal started Jun 20 19:41:53.013353 systemd-journald[214]: Runtime Journal (/run/log/journal/01ae3e11fc914efbb16c982aca6b8d5a) is 8M, max 78.5M, 70.5M free. Jun 20 19:41:53.010222 systemd-modules-load[215]: Inserted module 'overlay' Jun 20 19:41:53.046524 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:41:53.046555 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jun 20 19:41:53.046570 kernel: Bridge firewalling registered Jun 20 19:41:53.045963 systemd-modules-load[215]: Inserted module 'br_netfilter' Jun 20 19:41:53.047482 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:41:53.048392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:41:53.051348 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:41:53.055360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:41:53.059332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:41:53.067617 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:41:53.079102 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:41:53.083337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:41:53.090674 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:41:53.091160 systemd-tmpfiles[232]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 19:41:53.095286 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:41:53.096674 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:41:53.103318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 20 19:41:53.114248 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:41:53.147563 systemd-resolved[253]: Positive Trust Anchors: Jun 20 19:41:53.148272 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:41:53.148315 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:41:53.154869 systemd-resolved[253]: Defaulting to hostname 'linux'. Jun 20 19:41:53.155797 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:41:53.156629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:41:53.189218 kernel: SCSI subsystem initialized Jun 20 19:41:53.200234 kernel: Loading iSCSI transport class v2.0-870. 
Jun 20 19:41:53.213210 kernel: iscsi: registered transport (tcp) Jun 20 19:41:53.237294 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:41:53.237330 kernel: QLogic iSCSI HBA Driver Jun 20 19:41:53.262923 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:41:53.287100 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:41:53.288238 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:41:53.356070 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:41:53.358914 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:41:53.422263 kernel: raid6: sse2x4 gen() 12842 MB/s Jun 20 19:41:53.440245 kernel: raid6: sse2x2 gen() 14627 MB/s Jun 20 19:41:53.458813 kernel: raid6: sse2x1 gen() 9697 MB/s Jun 20 19:41:53.458916 kernel: raid6: using algorithm sse2x2 gen() 14627 MB/s Jun 20 19:41:53.477852 kernel: raid6: .... xor() 9246 MB/s, rmw enabled Jun 20 19:41:53.477978 kernel: raid6: using ssse3x2 recovery algorithm Jun 20 19:41:53.501502 kernel: xor: measuring software checksum speed Jun 20 19:41:53.501615 kernel: prefetch64-sse : 17063 MB/sec Jun 20 19:41:53.503771 kernel: generic_sse : 14259 MB/sec Jun 20 19:41:53.503827 kernel: xor: using function: prefetch64-sse (17063 MB/sec) Jun 20 19:41:53.710252 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:41:53.719485 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:41:53.725794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:41:53.751861 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jun 20 19:41:53.758425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:41:53.766141 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jun 20 19:41:53.800771 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Jun 20 19:41:53.846994 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:41:53.852514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:41:53.925223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:41:53.927697 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:41:54.019203 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jun 20 19:41:54.046259 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jun 20 19:41:54.051846 kernel: libata version 3.00 loaded. Jun 20 19:41:54.061249 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 19:41:54.061321 kernel: GPT:17805311 != 20971519 Jun 20 19:41:54.061342 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 19:41:54.064097 kernel: GPT:17805311 != 20971519 Jun 20 19:41:54.066480 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 19:41:54.068716 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:41:54.073088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:41:54.073247 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:41:54.077633 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 20 19:41:54.074851 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:41:54.076607 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:41:54.081167 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jun 20 19:41:54.097051 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 20 19:41:54.097073 kernel: scsi host0: ata_piix Jun 20 19:41:54.097238 kernel: scsi host1: ata_piix Jun 20 19:41:54.097347 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 Jun 20 19:41:54.097360 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 Jun 20 19:41:54.155125 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 20 19:41:54.184943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:41:54.204083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 20 19:41:54.212848 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 20 19:41:54.213534 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 20 19:41:54.224826 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 20 19:41:54.227287 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:41:54.251571 disk-uuid[558]: Primary Header is updated. Jun 20 19:41:54.251571 disk-uuid[558]: Secondary Entries is updated. Jun 20 19:41:54.251571 disk-uuid[558]: Secondary Header is updated. Jun 20 19:41:54.253769 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:41:54.255549 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:41:54.256087 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:41:54.268763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:41:54.256586 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jun 20 19:41:54.259286 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:41:54.306878 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:41:55.287383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:41:55.289561 disk-uuid[562]: The operation has completed successfully. Jun 20 19:41:55.367848 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:41:55.368781 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:41:55.441291 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:41:55.463362 sh[583]: Success Jun 20 19:41:55.511848 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:41:55.511945 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:41:55.518397 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:41:55.541222 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" Jun 20 19:41:55.617779 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:41:55.624335 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:41:55.640514 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 20 19:41:55.665102 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:41:55.665208 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (595) Jun 20 19:41:55.673948 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:41:55.674010 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:41:55.674041 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:41:55.692913 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:41:55.695001 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:41:55.697607 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:41:55.700308 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:41:55.705690 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:41:55.753270 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (625) Jun 20 19:41:55.760365 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:41:55.760479 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:41:55.761311 kernel: BTRFS info (device vda6): using free-space-tree Jun 20 19:41:55.773279 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:41:55.774020 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:41:55.780483 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:41:55.836377 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jun 20 19:41:55.839425 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:41:55.882424 systemd-networkd[765]: lo: Link UP Jun 20 19:41:55.882436 systemd-networkd[765]: lo: Gained carrier Jun 20 19:41:55.883486 systemd-networkd[765]: Enumeration completed Jun 20 19:41:55.885575 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:41:55.885579 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:41:55.885942 systemd-networkd[765]: eth0: Link UP Jun 20 19:41:55.885945 systemd-networkd[765]: eth0: Gained carrier Jun 20 19:41:55.885954 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:41:55.889645 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:41:55.890560 systemd[1]: Reached target network.target - Network. Jun 20 19:41:55.897792 systemd-networkd[765]: eth0: DHCPv4 address 172.24.4.229/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 20 19:41:56.012416 ignition[675]: Ignition 2.21.0 Jun 20 19:41:56.013199 ignition[675]: Stage: fetch-offline Jun 20 19:41:56.013240 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:41:56.013249 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 20 19:41:56.013326 ignition[675]: parsed url from cmdline: "" Jun 20 19:41:56.013330 ignition[675]: no config URL provided Jun 20 19:41:56.013335 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:41:56.013342 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:41:56.016589 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 20 19:41:56.013349 ignition[675]: failed to fetch config: resource requires networking Jun 20 19:41:56.013526 ignition[675]: Ignition finished successfully Jun 20 19:41:56.021971 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 19:41:56.061448 ignition[777]: Ignition 2.21.0 Jun 20 19:41:56.061477 ignition[777]: Stage: fetch Jun 20 19:41:56.061812 ignition[777]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:41:56.061837 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 20 19:41:56.062014 ignition[777]: parsed url from cmdline: "" Jun 20 19:41:56.062023 ignition[777]: no config URL provided Jun 20 19:41:56.062034 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:41:56.062051 ignition[777]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:41:56.062238 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jun 20 19:41:56.062283 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jun 20 19:41:56.062342 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jun 20 19:41:56.331071 ignition[777]: GET result: OK Jun 20 19:41:56.332078 ignition[777]: parsing config with SHA512: f9eda771580e91ddadbd377e3789f3883b5952763ce76de03a94a78ddf667dea01507d8af3724b9ba25f06562ab250517c36082f014254f5c34b02b5baff2be5 Jun 20 19:41:56.341699 unknown[777]: fetched base config from "system" Jun 20 19:41:56.341724 unknown[777]: fetched base config from "system" Jun 20 19:41:56.342808 ignition[777]: fetch: fetch complete Jun 20 19:41:56.341737 unknown[777]: fetched user config from "openstack" Jun 20 19:41:56.342821 ignition[777]: fetch: fetch passed Jun 20 19:41:56.348818 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 19:41:56.342912 ignition[777]: Ignition finished successfully Jun 20 19:41:56.353054 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jun 20 19:41:56.425375 ignition[784]: Ignition 2.21.0 Jun 20 19:41:56.425430 ignition[784]: Stage: kargs Jun 20 19:41:56.425788 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:41:56.425814 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 20 19:41:56.432452 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:41:56.428519 ignition[784]: kargs: kargs passed Jun 20 19:41:56.428626 ignition[784]: Ignition finished successfully Jun 20 19:41:56.439574 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:41:56.474817 ignition[790]: Ignition 2.21.0 Jun 20 19:41:56.474833 ignition[790]: Stage: disks Jun 20 19:41:56.474975 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:41:56.474985 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 20 19:41:56.476324 ignition[790]: disks: disks passed Jun 20 19:41:56.478674 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:41:56.476385 ignition[790]: Ignition finished successfully Jun 20 19:41:56.480754 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:41:56.482523 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:41:56.484416 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:41:56.486329 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:41:56.488401 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:41:56.491925 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:41:56.529445 systemd-fsck[799]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jun 20 19:41:56.548311 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:41:56.552657 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jun 20 19:41:56.777260 kernel: EXT4-fs (vda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:41:56.778967 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:41:56.780994 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:41:56.786580 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:41:56.795369 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:41:56.804010 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 20 19:41:56.809487 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jun 20 19:41:56.815368 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:41:56.817567 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:41:56.824928 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:41:56.837239 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (807) Jun 20 19:41:56.837296 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:41:56.843944 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:41:56.844005 kernel: BTRFS info (device vda6): using free-space-tree Jun 20 19:41:56.851506 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:41:56.867759 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:41:56.998160 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:41:56.999519 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:41:57.005618 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:41:57.014858 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:41:57.021619 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:41:57.183835 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:41:57.188381 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:41:57.191430 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:41:57.218853 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:41:57.223338 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:41:57.251531 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:41:57.258931 ignition[925]: INFO : Ignition 2.21.0 Jun 20 19:41:57.261252 ignition[925]: INFO : Stage: mount Jun 20 19:41:57.261252 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:41:57.261252 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 20 19:41:57.263110 ignition[925]: INFO : mount: mount passed Jun 20 19:41:57.263110 ignition[925]: INFO : Ignition finished successfully Jun 20 19:41:57.263122 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jun 20 19:41:57.398618 systemd-networkd[765]: eth0: Gained IPv6LL Jun 20 19:41:58.041290 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:42:00.056263 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:42:04.071265 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 20 19:42:04.078885 coreos-metadata[809]: Jun 20 19:42:04.078 WARN failed to locate config-drive, using the metadata service API instead Jun 20 19:42:04.120070 coreos-metadata[809]: Jun 20 19:42:04.120 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 20 19:42:04.135577 coreos-metadata[809]: Jun 20 19:42:04.135 INFO Fetch successful Jun 20 19:42:04.136936 coreos-metadata[809]: Jun 20 19:42:04.136 INFO wrote hostname ci-4344-1-0-8-afb8bdccbb.novalocal to /sysroot/etc/hostname Jun 20 19:42:04.140096 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jun 20 19:42:04.140417 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jun 20 19:42:04.147814 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:42:04.176250 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:42:04.211268 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (941) Jun 20 19:42:04.219423 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:42:04.219491 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:42:04.223618 kernel: BTRFS info (device vda6): using free-space-tree Jun 20 19:42:04.237492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:42:04.286173 ignition[959]: INFO : Ignition 2.21.0 Jun 20 19:42:04.286173 ignition[959]: INFO : Stage: files Jun 20 19:42:04.289013 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:42:04.289013 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 20 19:42:04.289013 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:42:04.295092 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:42:04.295092 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:42:04.300307 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:42:04.302167 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:42:04.302167 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:42:04.301074 unknown[959]: wrote ssh authorized keys file for user: core Jun 20 19:42:04.308204 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:42:04.308204 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 20 19:42:04.642360 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:42:06.683044 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:42:06.683044 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:42:06.688252 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 
19:42:06.688252 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:42:06.688252 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:42:06.688252 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:42:06.688252 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:42:06.688252 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:42:06.688252 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:42:06.702944 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:42:06.702944 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:42:06.702944 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:42:06.702944 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:42:06.702944 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:42:06.702944 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jun 20 19:42:07.439220 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 20 19:42:09.097606 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:42:09.097606 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 20 19:42:09.102852 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:42:09.108100 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:42:09.108100 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 20 19:42:09.108100 ignition[959]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:42:09.116026 ignition[959]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:42:09.116026 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:42:09.116026 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:42:09.116026 ignition[959]: INFO : files: files passed Jun 20 19:42:09.116026 ignition[959]: INFO : Ignition finished successfully Jun 20 19:42:09.109865 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:42:09.114306 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:42:09.118278 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jun 20 19:42:09.139069 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:42:09.147246 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:42:09.147246 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:42:09.139190 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:42:09.154932 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:42:09.148007 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:42:09.150386 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:42:09.153116 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 19:42:09.202951 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:42:09.204477 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:42:09.208544 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:42:09.209944 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:42:09.212618 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:42:09.214396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:42:09.267675 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:42:09.272873 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:42:09.311560 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:42:09.313290 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jun 20 19:42:09.316558 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:42:09.319596 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:42:09.319984 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:42:09.323669 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:42:09.325796 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:42:09.329505 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:42:09.332596 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:42:09.335875 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:42:09.339596 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:42:09.342637 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:42:09.345519 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:42:09.348718 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:42:09.351621 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:42:09.354572 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:42:09.357338 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:42:09.357754 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:42:09.360735 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:42:09.362825 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:42:09.365102 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:42:09.365423 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:42:09.368268 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:42:09.368553 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:42:09.372499 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:42:09.372818 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:42:09.375966 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:42:09.376386 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:42:09.381637 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:42:09.387532 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:42:09.387833 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:42:09.397314 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:42:09.402355 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:42:09.402717 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:42:09.409408 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:42:09.409718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:42:09.417270 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:42:09.417678 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:42:09.428643 ignition[1013]: INFO : Ignition 2.21.0
Jun 20 19:42:09.428643 ignition[1013]: INFO : Stage: umount
Jun 20 19:42:09.430228 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:42:09.430228 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:42:09.430228 ignition[1013]: INFO : umount: umount passed
Jun 20 19:42:09.430228 ignition[1013]: INFO : Ignition finished successfully
Jun 20 19:42:09.434691 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:42:09.434817 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:42:09.435940 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:42:09.435984 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:42:09.439597 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:42:09.439640 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:42:09.440612 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:42:09.440649 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:42:09.441646 systemd[1]: Stopped target network.target - Network.
Jun 20 19:42:09.442635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:42:09.442677 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:42:09.443735 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:42:09.444716 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:42:09.444943 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:42:09.445792 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:42:09.447033 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:42:09.448265 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:42:09.448298 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:42:09.449235 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:42:09.449264 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:42:09.450404 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:42:09.450448 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:42:09.451652 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:42:09.451690 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:42:09.454838 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:42:09.455856 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:42:09.461533 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:42:09.464396 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:42:09.464490 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:42:09.468836 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:42:09.469023 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:42:09.469113 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:42:09.471240 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:42:09.471439 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:42:09.471512 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:42:09.472768 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:42:09.473548 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:42:09.473589 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:42:09.474561 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:42:09.474605 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:42:09.476198 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:42:09.477966 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:42:09.478017 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:42:09.480611 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:42:09.480668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:42:09.481272 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:42:09.481313 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:42:09.482331 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:42:09.482369 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:42:09.483763 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:42:09.485485 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:42:09.485546 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:42:09.493677 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:42:09.495469 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:42:09.496277 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:42:09.496310 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:42:09.497577 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:42:09.497605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:42:09.498718 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:42:09.498758 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:42:09.500336 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:42:09.500376 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:42:09.501457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:42:09.501502 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:42:09.504275 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:42:09.505419 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:42:09.505470 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:42:09.507261 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:42:09.507302 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:42:09.509255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:42:09.509295 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:42:09.512226 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:42:09.512277 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:42:09.512318 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:42:09.512588 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:42:09.517250 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:42:09.520890 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:42:09.520977 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:42:09.522447 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:42:09.524202 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:42:09.540385 systemd[1]: Switching root.
Jun 20 19:42:09.572279 systemd-journald[214]: Journal stopped
Jun 20 19:42:11.244130 systemd-journald[214]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:42:11.244216 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:42:11.244245 kernel: SELinux: policy capability open_perms=1
Jun 20 19:42:11.244265 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:42:11.244289 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:42:11.244309 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:42:11.244323 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:42:11.244336 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:42:11.244350 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:42:11.244371 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:42:11.244384 kernel: audit: type=1403 audit(1750448530.229:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:42:11.244399 systemd[1]: Successfully loaded SELinux policy in 78.583ms.
Jun 20 19:42:11.244421 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.128ms.
Jun 20 19:42:11.244438 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:42:11.244453 systemd[1]: Detected virtualization kvm.
Jun 20 19:42:11.244467 systemd[1]: Detected architecture x86-64.
Jun 20 19:42:11.244481 systemd[1]: Detected first boot.
Jun 20 19:42:11.244495 systemd[1]: Hostname set to .
Jun 20 19:42:11.244510 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:42:11.244524 zram_generator::config[1056]: No configuration found.
Jun 20 19:42:11.244542 kernel: Guest personality initialized and is inactive
Jun 20 19:42:11.244556 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:42:11.244569 kernel: Initialized host personality
Jun 20 19:42:11.244582 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:42:11.244596 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:42:11.244612 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:42:11.244626 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:42:11.244640 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:42:11.244655 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:42:11.244672 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:42:11.244688 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:42:11.244706 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:42:11.244720 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:42:11.244735 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:42:11.244750 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:42:11.244765 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:42:11.244779 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:42:11.244795 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:42:11.244811 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:42:11.244827 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:42:11.244842 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:42:11.244857 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:42:11.244872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:42:11.244888 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:42:11.244903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:42:11.244917 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:42:11.244931 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:42:11.244946 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:42:11.244960 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:42:11.244975 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:42:11.244989 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:42:11.245003 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:42:11.245017 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:42:11.245034 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:42:11.245048 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:42:11.245062 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:42:11.245077 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:42:11.245091 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:42:11.245106 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:42:11.245120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:42:11.245134 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:42:11.245149 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:42:11.245165 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:42:11.247229 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:42:11.247249 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:42:11.247264 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:42:11.247277 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:42:11.247292 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:42:11.247306 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:42:11.247319 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:42:11.247336 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:42:11.247350 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:42:11.247364 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:42:11.247379 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:42:11.247392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:42:11.247406 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:42:11.247420 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:42:11.247434 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:42:11.247447 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:42:11.247464 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:42:11.247478 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:42:11.247491 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:42:11.247505 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:42:11.247519 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:42:11.247533 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:42:11.247546 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:42:11.247560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:42:11.247575 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:42:11.247590 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:42:11.247606 kernel: loop: module loaded
Jun 20 19:42:11.247622 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:42:11.247636 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:42:11.247650 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:42:11.247663 systemd[1]: Stopped verity-setup.service.
Jun 20 19:42:11.247677 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:42:11.247691 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:42:11.247705 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:42:11.247721 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:42:11.247735 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:42:11.247748 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:42:11.247761 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:42:11.247775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:42:11.247788 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:42:11.247802 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:42:11.247816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:42:11.247829 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:42:11.247845 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:42:11.247880 systemd-journald[1139]: Collecting audit messages is disabled.
Jun 20 19:42:11.247911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:42:11.247925 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:42:11.247939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:42:11.247953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:42:11.247966 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:42:11.247981 systemd-journald[1139]: Journal started
Jun 20 19:42:11.248010 systemd-journald[1139]: Runtime Journal (/run/log/journal/01ae3e11fc914efbb16c982aca6b8d5a) is 8M, max 78.5M, 70.5M free.
Jun 20 19:42:10.892321 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:42:10.911415 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 20 19:42:10.911848 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:42:11.254261 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:42:11.255935 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:42:11.263223 kernel: fuse: init (API version 7.41)
Jun 20 19:42:11.271643 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:42:11.279254 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:42:11.279808 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:42:11.279841 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:42:11.281485 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:42:11.284760 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:42:11.285405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:42:11.308022 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:42:11.309523 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:42:11.310153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:42:11.311018 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:42:11.311636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:42:11.313395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:42:11.316336 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:42:11.319802 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:42:11.319977 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:42:11.320739 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:42:11.321515 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:42:11.329541 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:42:11.333307 kernel: ACPI: bus type drm_connector registered
Jun 20 19:42:11.335778 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:42:11.343217 kernel: loop0: detected capacity change from 0 to 8
Jun 20 19:42:11.344912 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:42:11.345894 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:42:11.348387 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:42:11.351513 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:42:11.366800 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:42:11.366917 systemd-journald[1139]: Time spent on flushing to /var/log/journal/01ae3e11fc914efbb16c982aca6b8d5a is 49.217ms for 973 entries.
Jun 20 19:42:11.366917 systemd-journald[1139]: System Journal (/var/log/journal/01ae3e11fc914efbb16c982aca6b8d5a) is 8M, max 584.8M, 576.8M free.
Jun 20 19:42:11.466058 systemd-journald[1139]: Received client request to flush runtime journal.
Jun 20 19:42:11.466119 kernel: loop1: detected capacity change from 0 to 146240
Jun 20 19:42:11.373235 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:42:11.377444 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:42:11.389376 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:42:11.396592 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:42:11.405535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:42:11.467928 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:42:11.489534 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:42:11.515213 kernel: loop2: detected capacity change from 0 to 224512
Jun 20 19:42:11.535018 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:42:11.543506 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:42:11.577748 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Jun 20 19:42:11.578087 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Jun 20 19:42:11.582778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:42:11.589580 kernel: loop3: detected capacity change from 0 to 113872
Jun 20 19:42:11.636219 kernel: loop4: detected capacity change from 0 to 8
Jun 20 19:42:11.641211 kernel: loop5: detected capacity change from 0 to 146240
Jun 20 19:42:11.705228 kernel: loop6: detected capacity change from 0 to 224512
Jun 20 19:42:11.765233 kernel: loop7: detected capacity change from 0 to 113872
Jun 20 19:42:11.823820 (sd-merge)[1219]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jun 20 19:42:11.824811 (sd-merge)[1219]: Merged extensions into '/usr'.
Jun 20 19:42:11.830907 systemd[1]: Reload requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:42:11.830921 systemd[1]: Reloading...
Jun 20 19:42:11.944198 zram_generator::config[1245]: No configuration found.
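The (sd-merge) entries record systemd-sysext overlaying the extension images fetched earlier (containerd, docker, kubernetes, and the OpenStack OEM payload) onto /usr. On a running Flatcar host, the result of that merge can be inspected with the standard systemd-sysext commands — a hedged sketch, to be run on the booted machine, not here:

```
# List merged extension hierarchies and the images backing them
systemd-sysext status

# After adding or removing a *.raw image under /var/lib/extensions
# (or, as in this boot, /opt/extensions via symlink), re-run the merge
sudo systemd-sysext refresh
```

The per-extension "loopN: detected capacity change" kernel lines above correspond to each .raw image being attached to a loop device before the overlay is assembled.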
Jun 20 19:42:12.094837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:42:12.196763 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:42:12.197052 systemd[1]: Reloading finished in 365 ms.
Jun 20 19:42:12.218461 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:42:12.225416 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:42:12.229301 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:42:12.267042 systemd[1]: Reload requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:42:12.267062 systemd[1]: Reloading...
Jun 20 19:42:12.276686 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 19:42:12.277098 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 19:42:12.277803 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:42:12.278281 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:42:12.279522 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:42:12.279792 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jun 20 19:42:12.279844 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jun 20 19:42:12.301749 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:42:12.301759 systemd-tmpfiles[1301]: Skipping /boot
Jun 20 19:42:12.318739 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:42:12.318861 systemd-tmpfiles[1301]: Skipping /boot
Jun 20 19:42:12.350349 zram_generator::config[1329]: No configuration found.
Jun 20 19:42:12.357515 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:42:12.471295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:42:12.574038 systemd[1]: Reloading finished in 306 ms.
Jun 20 19:42:12.596021 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:42:12.596949 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:42:12.602708 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:42:12.616315 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:42:12.619475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:42:12.621171 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:42:12.626491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:42:12.638363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:42:12.644459 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 19:42:12.660136 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 19:42:12.668069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:42:12.669100 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:42:12.679798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:42:12.685313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:42:12.697138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:42:12.698321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:42:12.698452 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:42:12.698577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:42:12.700844 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 19:42:12.705797 systemd-udevd[1392]: Using default interface naming scheme 'v255'.
Jun 20 19:42:12.709613 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:42:12.709881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:42:12.710142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:42:12.711030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:42:12.714589 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 19:42:12.715253 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:42:12.722852 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 19:42:12.724120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:42:12.725343 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:42:12.731127 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:42:12.731616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:42:12.734352 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:42:12.735391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:42:12.735437 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:42:12.735534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:42:12.736240 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:42:12.736963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:42:12.737576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:42:12.741563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:42:12.750651 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 20 19:42:12.759679 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 19:42:12.761085 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:42:12.761278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:42:12.762565 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:42:12.762709 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:42:12.765048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:42:12.765094 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 19:42:12.773376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:42:12.779903 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:42:12.782526 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 19:42:12.783623 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 19:42:12.833975 augenrules[1460]: No rules
Jun 20 19:42:12.835721 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:42:12.837240 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:42:12.947821 systemd-resolved[1391]: Positive Trust Anchors:
Jun 20 19:42:12.947836 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:42:12.947879 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:42:12.955373 systemd-networkd[1427]: lo: Link UP
Jun 20 19:42:12.955380 systemd-networkd[1427]: lo: Gained carrier
Jun 20 19:42:12.955724 systemd-resolved[1391]: Using system hostname 'ci-4344-1-0-8-afb8bdccbb.novalocal'.
Jun 20 19:42:12.959084 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:42:12.959709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:42:12.960281 systemd-networkd[1427]: Enumeration completed
Jun 20 19:42:12.960554 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:42:12.961926 systemd[1]: Reached target network.target - Network.
Jun 20 19:42:12.963977 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 19:42:12.967361 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 19:42:13.002719 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 20 19:42:13.003436 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:42:13.004004 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 19:42:13.004799 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 19:42:13.006032 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jun 20 19:42:13.006680 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 19:42:13.008023 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 19:42:13.008055 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:42:13.008534 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 19:42:13.009327 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 19:42:13.010309 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 19:42:13.011238 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:42:13.013267 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 19:42:13.016040 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 19:42:13.022012 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 19:42:13.023821 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 19:42:13.025004 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 19:42:13.036411 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 19:42:13.039316 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 19:42:13.042252 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 19:42:13.043522 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 19:42:13.047847 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:42:13.048645 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:42:13.049408 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:42:13.049525 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:42:13.051279 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 19:42:13.055161 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 19:42:13.059402 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 19:42:13.064010 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:42:13.071970 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 19:42:13.076847 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 19:42:13.077467 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 19:42:13.081923 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:42:13.084361 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jun 20 19:42:13.089274 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 19:42:13.097394 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 19:42:13.099513 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 19:42:13.100244 jq[1485]: false
Jun 20 19:42:13.103046 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:42:13.112405 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:42:13.113773 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:42:13.115342 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:42:13.120745 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:42:13.125301 oslogin_cache_refresh[1488]: Refreshing passwd entry cache
Jun 20 19:42:13.126515 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Refreshing passwd entry cache
Jun 20 19:42:13.126931 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:42:13.127511 oslogin_cache_refresh[1488]: Failure getting users, quitting
Jun 20 19:42:13.129543 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Failure getting users, quitting
Jun 20 19:42:13.129543 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:42:13.129543 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Refreshing group entry cache
Jun 20 19:42:13.129543 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Failure getting groups, quitting
Jun 20 19:42:13.129543 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:42:13.127523 oslogin_cache_refresh[1488]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:42:13.127552 oslogin_cache_refresh[1488]: Refreshing group entry cache
Jun 20 19:42:13.127979 oslogin_cache_refresh[1488]: Failure getting groups, quitting
Jun 20 19:42:13.127986 oslogin_cache_refresh[1488]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:42:13.134410 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:42:13.135281 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:42:13.135446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:42:13.135665 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jun 20 19:42:13.135816 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jun 20 19:42:13.142446 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:42:13.142633 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:42:13.143527 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:42:13.158457 extend-filesystems[1487]: Found /dev/vda6
Jun 20 19:42:13.164935 extend-filesystems[1487]: Found /dev/vda9
Jun 20 19:42:13.164935 extend-filesystems[1487]: Checking size of /dev/vda9
Jun 20 19:42:13.173661 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:42:13.178220 extend-filesystems[1487]: Resized partition /dev/vda9
Jun 20 19:42:13.181213 jq[1499]: true
Jun 20 19:42:13.189444 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:42:13.189648 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:42:13.199335 extend-filesystems[1528]: resize2fs 1.47.2 (1-Jan-2025)
Jun 20 19:42:13.206591 update_engine[1498]: I20250620 19:42:13.203038  1498 main.cc:92] Flatcar Update Engine starting
Jun 20 19:42:13.215243 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jun 20 19:42:13.216949 tar[1501]: linux-amd64/LICENSE
Jun 20 19:42:13.220044 tar[1501]: linux-amd64/helm
Jun 20 19:42:13.224206 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jun 20 19:42:13.231691 jq[1526]: true
Jun 20 19:42:13.288791 extend-filesystems[1528]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 20 19:42:13.288791 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 20 19:42:13.288791 extend-filesystems[1528]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jun 20 19:42:13.243735 dbus-daemon[1482]: [system] SELinux support is enabled
Jun 20 19:42:13.304844 update_engine[1498]: I20250620 19:42:13.259253  1498 update_check_scheduler.cc:74] Next update check in 9m56s
Jun 20 19:42:13.243856 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:42:13.305023 extend-filesystems[1487]: Resized filesystem in /dev/vda9
Jun 20 19:42:13.247914 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:42:13.247937 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:42:13.249495 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:42:13.249519 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:42:13.258283 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:42:13.284779 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:42:13.284784 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:42:13.286689 systemd-networkd[1427]: eth0: Link UP
Jun 20 19:42:13.287158 systemd-networkd[1427]: eth0: Gained carrier
Jun 20 19:42:13.287206 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:42:13.300227 systemd-networkd[1427]: eth0: DHCPv4 address 172.24.4.229/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jun 20 19:42:13.301610 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Jun 20 19:42:13.308503 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:42:13.326620 systemd-logind[1497]: New seat seat0.
Jun 20 19:42:13.327636 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:42:13.329071 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:42:13.329269 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:42:13.345954 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:42:13.351800 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:42:13.383501 bash[1555]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:42:13.384428 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:42:13.388969 systemd[1]: Starting sshkeys.service...
Jun 20 19:42:13.394727 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 19:42:13.404206 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:42:13.436789 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 20 19:42:13.440600 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 20 19:42:13.448517 systemd-timesyncd[1414]: Contacted time server 144.202.66.214:123 (0.flatcar.pool.ntp.org).
Jun 20 19:42:13.448568 systemd-timesyncd[1414]: Initial clock synchronization to Fri 2025-06-20 19:42:13.533682 UTC.
Jun 20 19:42:13.470468 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:42:13.486221 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jun 20 19:42:13.508817 kernel: ACPI: button: Power Button [PWRF]
Jun 20 19:42:13.516200 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jun 20 19:42:13.516428 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jun 20 19:42:13.542569 locksmithd[1535]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 19:42:13.684052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:42:13.788690 containerd[1512]: time="2025-06-20T19:42:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jun 20 19:42:13.798163 containerd[1512]: time="2025-06-20T19:42:13.798115040Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jun 20 19:42:13.837227 containerd[1512]: time="2025-06-20T19:42:13.836146908Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.51µs"
Jun 20 19:42:13.837349 containerd[1512]: time="2025-06-20T19:42:13.837329997Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jun 20 19:42:13.837759 containerd[1512]: time="2025-06-20T19:42:13.837739365Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jun 20 19:42:13.839189 containerd[1512]: time="2025-06-20T19:42:13.837951954Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jun 20 19:42:13.839189 containerd[1512]: time="2025-06-20T19:42:13.837976119Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jun 20 19:42:13.839189 containerd[1512]: time="2025-06-20T19:42:13.838003310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:42:13.839189 containerd[1512]: time="2025-06-20T19:42:13.838067721Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:42:13.839189 containerd[1512]: time="2025-06-20T19:42:13.838085354Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843313159Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843332986Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843353073Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843363323Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843440417Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843623240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843651012Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843662895Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843693332Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843916861Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:42:13.844102 containerd[1512]: time="2025-06-20T19:42:13.843973277Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:42:13.858216 containerd[1512]: time="2025-06-20T19:42:13.858170854Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858320284Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858342035Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858429189Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858451430Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858474844Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858488360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858500763Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858512806Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858523315Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858533795Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858547190Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858640685Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858665742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:42:13.859197 containerd[1512]: time="2025-06-20T19:42:13.858680390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858690970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858704124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858716137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858728169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858738890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858750602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858761692Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858772813Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858828808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858843255Z" level=info msg="Start snapshots syncer"
Jun 20 19:42:13.859503 containerd[1512]: time="2025-06-20T19:42:13.858858995Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:42:13.859752 containerd[1512]: time="2025-06-20T19:42:13.859077545Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:42:13.859752 containerd[1512]: time="2025-06-20T19:42:13.859131917Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:42:13.860223 containerd[1512]: time="2025-06-20T19:42:13.860204208Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861706495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861735910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861770024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861785263Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861798247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861809819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861820850Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861863750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861885000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 20 19:42:13.862416 containerd[1512]: time="2025-06-20T19:42:13.861899227Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863507192Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863536437Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863548600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863617489Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863627829Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863638889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863650361Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863686369Z" level=info msg="runtime interface created"
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863694163Z" level=info msg="created NRI interface"
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863703310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863717337Z" level=info msg="Connect containerd service"
Jun 20 19:42:13.863968 containerd[1512]: time="2025-06-20T19:42:13.863745700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 20 19:42:13.871006 containerd[1512]: time="2025-06-20T19:42:13.869482509Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:42:14.030982 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 19:42:14.039623 systemd-logind[1497]: Watching system buttons on /dev/input/event2 (Power Button)
Jun 20 19:42:14.063215 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jun 20 19:42:14.064208 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jun 20 19:42:14.170780 kernel: Console: switching to colour dummy device 80x25
Jun 20 19:42:14.171838 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jun 20 19:42:14.171874 kernel: [drm] features: -context_init
Jun 20 19:42:14.173575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:42:14.173821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:42:14.177571 kernel: [drm] number of scanouts: 1
Jun 20 19:42:14.177600 kernel: [drm] number of cap sets: 0
Jun 20 19:42:14.177864 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:42:14.181522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:42:14.182216 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jun 20 19:42:14.231991 containerd[1512]: time="2025-06-20T19:42:14.231942403Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 20 19:42:14.232092 containerd[1512]: time="2025-06-20T19:42:14.232007328Z" level=info msg=serving...
address=/run/containerd/containerd.sock Jun 20 19:42:14.232092 containerd[1512]: time="2025-06-20T19:42:14.232026658Z" level=info msg="Start subscribing containerd event" Jun 20 19:42:14.232092 containerd[1512]: time="2025-06-20T19:42:14.232051891Z" level=info msg="Start recovering state" Jun 20 19:42:14.232159 containerd[1512]: time="2025-06-20T19:42:14.232122101Z" level=info msg="Start event monitor" Jun 20 19:42:14.232159 containerd[1512]: time="2025-06-20T19:42:14.232137107Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:42:14.232159 containerd[1512]: time="2025-06-20T19:42:14.232144621Z" level=info msg="Start streaming server" Jun 20 19:42:14.232159 containerd[1512]: time="2025-06-20T19:42:14.232152610Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:42:14.232282 containerd[1512]: time="2025-06-20T19:42:14.232160680Z" level=info msg="runtime interface starting up..." Jun 20 19:42:14.232282 containerd[1512]: time="2025-06-20T19:42:14.232166866Z" level=info msg="starting plugins..." Jun 20 19:42:14.232282 containerd[1512]: time="2025-06-20T19:42:14.232179605Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:42:14.232384 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:42:14.234233 containerd[1512]: time="2025-06-20T19:42:14.233375556Z" level=info msg="containerd successfully booted in 0.445880s" Jun 20 19:42:14.266934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:42:14.431096 tar[1501]: linux-amd64/README.md Jun 20 19:42:14.451654 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:42:14.462869 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:42:14.483307 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:42:14.485688 systemd[1]: Starting issuegen.service - Generate /run/issue... 
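The `level=error` CNI entry above ("no network config found in /etc/cni/net.d") is expected on a node where no pod network add-on has been installed yet; the CRI plugin retries until a conflist appears in the `confDir` shown in the dumped config. For illustration only, a minimal bridge conflist of the kind a network add-on would drop into /etc/cni/net.d might look like this — the name, subnet, and plugin choice here are assumptions, not taken from this log:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```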
Jun 20 19:42:14.503027 systemd[1]: issuegen.service: Deactivated successfully.
Jun 20 19:42:14.503264 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 20 19:42:14.505757 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 20 19:42:14.521696 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 20 19:42:14.523673 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 20 19:42:14.526479 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 20 19:42:14.526808 systemd[1]: Reached target getty.target - Login Prompts.
Jun 20 19:42:14.806949 systemd-networkd[1427]: eth0: Gained IPv6LL
Jun 20 19:42:14.813149 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 19:42:14.815763 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 19:42:14.821020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:42:14.824796 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 19:42:14.887028 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 19:42:15.232614 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:42:15.232690 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:42:16.865837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:42:16.878968 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:42:17.253279 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:42:17.258244 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:42:18.220399 kubelet[1653]: E0620 19:42:18.220258 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:42:18.227081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:42:18.228176 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:42:18.230359 systemd[1]: kubelet.service: Consumed 2.300s CPU time, 264.3M memory peak.
Jun 20 19:42:19.307322 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:42:19.314073 systemd[1]: Started sshd@0-172.24.4.229:22-172.24.4.1:44386.service - OpenSSH per-connection server daemon (172.24.4.1:44386).
Jun 20 19:42:19.645593 login[1632]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 20 19:42:19.650390 login[1633]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 20 19:42:19.702948 systemd-logind[1497]: New session 2 of user core.
Jun 20 19:42:19.710862 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 20 19:42:19.713969 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 20 19:42:19.718853 systemd-logind[1497]: New session 1 of user core.
Jun 20 19:42:19.738428 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
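The kubelet failure above (repeated at each scheduled restart later in the log) is the classic pre-join state: /var/lib/kubelet/config.yaml does not exist until `kubeadm init` or `kubeadm join` writes it, so the unit exits 1 and systemd keeps retrying. For orientation only, a KubeletConfiguration of the general shape kubeadm generates might look like this — the specific fields and values below are illustrative assumptions, not recovered from this node:

```yaml
# /var/lib/kubelet/config.yaml — illustrative sketch, normally written by kubeadm
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, consistent with SystemdCgroup=true in the
# containerd config dumped earlier in this log
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```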
Jun 20 19:42:19.742816 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 20 19:42:19.756716 (systemd)[1671]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 20 19:42:19.759511 systemd-logind[1497]: New session c1 of user core.
Jun 20 19:42:19.963578 systemd[1671]: Queued start job for default target default.target.
Jun 20 19:42:19.971983 systemd[1671]: Created slice app.slice - User Application Slice.
Jun 20 19:42:19.972023 systemd[1671]: Reached target paths.target - Paths.
Jun 20 19:42:19.972094 systemd[1671]: Reached target timers.target - Timers.
Jun 20 19:42:19.974742 systemd[1671]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 20 19:42:20.025515 systemd[1671]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 20 19:42:20.025915 systemd[1671]: Reached target sockets.target - Sockets.
Jun 20 19:42:20.026067 systemd[1671]: Reached target basic.target - Basic System.
Jun 20 19:42:20.026240 systemd[1671]: Reached target default.target - Main User Target.
Jun 20 19:42:20.026337 systemd[1671]: Startup finished in 258ms.
Jun 20 19:42:20.026676 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 20 19:42:20.040718 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 20 19:42:20.043105 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 20 19:42:20.593042 sshd[1663]: Accepted publickey for core from 172.24.4.1 port 44386 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:42:20.600555 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:42:20.618283 systemd-logind[1497]: New session 3 of user core.
Jun 20 19:42:20.628584 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 19:42:21.291307 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:42:21.291630 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:42:21.336819 systemd[1]: Started sshd@1-172.24.4.229:22-172.24.4.1:44400.service - OpenSSH per-connection server daemon (172.24.4.1:44400).
Jun 20 19:42:21.364544 coreos-metadata[1480]: Jun 20 19:42:21.364 WARN failed to locate config-drive, using the metadata service API instead
Jun 20 19:42:21.380382 coreos-metadata[1565]: Jun 20 19:42:21.380 WARN failed to locate config-drive, using the metadata service API instead
Jun 20 19:42:21.391919 coreos-metadata[1480]: Jun 20 19:42:21.391 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jun 20 19:42:21.409236 coreos-metadata[1565]: Jun 20 19:42:21.409 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jun 20 19:42:21.565449 coreos-metadata[1565]: Jun 20 19:42:21.565 INFO Fetch successful
Jun 20 19:42:21.566048 coreos-metadata[1565]: Jun 20 19:42:21.565 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jun 20 19:42:21.581154 coreos-metadata[1565]: Jun 20 19:42:21.581 INFO Fetch successful
Jun 20 19:42:21.588350 unknown[1565]: wrote ssh authorized keys file for user: core
Jun 20 19:42:21.654557 coreos-metadata[1480]: Jun 20 19:42:21.654 INFO Fetch successful
Jun 20 19:42:21.654940 coreos-metadata[1480]: Jun 20 19:42:21.654 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jun 20 19:42:21.671918 coreos-metadata[1480]: Jun 20 19:42:21.671 INFO Fetch successful
Jun 20 19:42:21.672232 coreos-metadata[1480]: Jun 20 19:42:21.672 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jun 20 19:42:21.675161 update-ssh-keys[1710]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:42:21.677623 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 20 19:42:21.681534 systemd[1]: Finished sshkeys.service.
Jun 20 19:42:21.687933 coreos-metadata[1480]: Jun 20 19:42:21.687 INFO Fetch successful
Jun 20 19:42:21.688988 coreos-metadata[1480]: Jun 20 19:42:21.688 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jun 20 19:42:21.703216 coreos-metadata[1480]: Jun 20 19:42:21.703 INFO Fetch successful
Jun 20 19:42:21.703380 coreos-metadata[1480]: Jun 20 19:42:21.703 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jun 20 19:42:21.722860 coreos-metadata[1480]: Jun 20 19:42:21.722 INFO Fetch successful
Jun 20 19:42:21.722860 coreos-metadata[1480]: Jun 20 19:42:21.722 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jun 20 19:42:21.741274 coreos-metadata[1480]: Jun 20 19:42:21.741 INFO Fetch successful
Jun 20 19:42:21.813960 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 20 19:42:21.815570 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 19:42:21.815949 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 20 19:42:21.816906 systemd[1]: Startup finished in 3.796s (kernel) + 17.508s (initrd) + 11.664s (userspace) = 32.969s.
Jun 20 19:42:22.974748 sshd[1706]: Accepted publickey for core from 172.24.4.1 port 44400 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:42:22.979744 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:42:23.001315 systemd-logind[1497]: New session 4 of user core.
Jun 20 19:42:23.012522 systemd[1]: Started session-4.scope - Session 4 of User core.
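The systemd "Startup finished" summary above can be parsed mechanically when auditing boot times across a fleet; a small sketch, where the regex and variable names are my own and not part of any tool (note the printed total is rounded independently of the parts, so it can differ from their sum by a millisecond):

```python
import re

# One "Startup finished" line, copied from the journal above.
line = ("Startup finished in 3.796s (kernel) + 17.508s (initrd) "
        "+ 11.664s (userspace) = 32.969s.")

# Map each boot phase name to its duration in seconds.
phases = {name: float(sec)
          for sec, name in re.findall(r"([0-9.]+)s \((\w+)\)", line)}

# Total as printed by systemd (rounded separately from the parts).
total = float(re.search(r"= ([0-9.]+)s", line).group(1))
```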
Jun 20 19:42:23.654259 sshd[1719]: Connection closed by 172.24.4.1 port 44400
Jun 20 19:42:23.653943 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Jun 20 19:42:23.681916 systemd[1]: sshd@1-172.24.4.229:22-172.24.4.1:44400.service: Deactivated successfully.
Jun 20 19:42:23.686015 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 19:42:23.688902 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit.
Jun 20 19:42:23.696409 systemd[1]: Started sshd@2-172.24.4.229:22-172.24.4.1:33510.service - OpenSSH per-connection server daemon (172.24.4.1:33510).
Jun 20 19:42:23.700912 systemd-logind[1497]: Removed session 4.
Jun 20 19:42:25.160357 sshd[1725]: Accepted publickey for core from 172.24.4.1 port 33510 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:42:25.165518 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:42:25.188268 systemd-logind[1497]: New session 5 of user core.
Jun 20 19:42:25.202585 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 19:42:25.880264 sshd[1727]: Connection closed by 172.24.4.1 port 33510
Jun 20 19:42:25.882598 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Jun 20 19:42:25.910801 systemd[1]: sshd@2-172.24.4.229:22-172.24.4.1:33510.service: Deactivated successfully.
Jun 20 19:42:25.921324 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 19:42:25.927275 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit.
Jun 20 19:42:25.937464 systemd[1]: Started sshd@3-172.24.4.229:22-172.24.4.1:33512.service - OpenSSH per-connection server daemon (172.24.4.1:33512).
Jun 20 19:42:25.941241 systemd-logind[1497]: Removed session 5.
Jun 20 19:42:27.181574 sshd[1733]: Accepted publickey for core from 172.24.4.1 port 33512 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:42:27.185079 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:42:27.196931 systemd-logind[1497]: New session 6 of user core.
Jun 20 19:42:27.218627 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 19:42:27.903235 sshd[1735]: Connection closed by 172.24.4.1 port 33512
Jun 20 19:42:27.904413 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Jun 20 19:42:27.920117 systemd[1]: sshd@3-172.24.4.229:22-172.24.4.1:33512.service: Deactivated successfully.
Jun 20 19:42:27.924326 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 19:42:27.926733 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit.
Jun 20 19:42:27.935079 systemd[1]: Started sshd@4-172.24.4.229:22-172.24.4.1:33526.service - OpenSSH per-connection server daemon (172.24.4.1:33526).
Jun 20 19:42:27.938169 systemd-logind[1497]: Removed session 6.
Jun 20 19:42:28.256052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:42:28.261836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:42:28.717599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:42:28.731517 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:42:28.948428 kubelet[1751]: E0620 19:42:28.948287 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:42:28.961506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:42:28.962446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:42:28.963979 systemd[1]: kubelet.service: Consumed 567ms CPU time, 111.2M memory peak.
Jun 20 19:42:29.422405 sshd[1741]: Accepted publickey for core from 172.24.4.1 port 33526 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:42:29.424297 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:42:29.438298 systemd-logind[1497]: New session 7 of user core.
Jun 20 19:42:29.448575 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 19:42:29.917087 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 19:42:29.917860 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:42:29.942806 sudo[1759]: pam_unix(sudo:session): session closed for user root
Jun 20 19:42:30.127247 sshd[1758]: Connection closed by 172.24.4.1 port 33526
Jun 20 19:42:30.128392 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Jun 20 19:42:30.147721 systemd[1]: sshd@4-172.24.4.229:22-172.24.4.1:33526.service: Deactivated successfully.
Jun 20 19:42:30.152571 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:42:30.155805 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:42:30.163056 systemd[1]: Started sshd@5-172.24.4.229:22-172.24.4.1:33530.service - OpenSSH per-connection server daemon (172.24.4.1:33530).
Jun 20 19:42:30.165983 systemd-logind[1497]: Removed session 7.
Jun 20 19:42:31.269504 sshd[1765]: Accepted publickey for core from 172.24.4.1 port 33530 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:42:31.273675 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:42:31.285518 systemd-logind[1497]: New session 8 of user core.
Jun 20 19:42:31.307695 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 19:42:31.689550 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 19:42:31.691512 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:42:31.710577 sudo[1769]: pam_unix(sudo:session): session closed for user root
Jun 20 19:42:31.724856 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 19:42:31.726450 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:42:31.745395 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:42:31.828596 augenrules[1791]: No rules
Jun 20 19:42:31.831416 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:42:31.831863 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:42:31.833444 sudo[1768]: pam_unix(sudo:session): session closed for user root
Jun 20 19:42:32.087246 sshd[1767]: Connection closed by 172.24.4.1 port 33530
Jun 20 19:42:32.087894 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Jun 20 19:42:32.101365 systemd[1]: sshd@5-172.24.4.229:22-172.24.4.1:33530.service: Deactivated successfully.
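The `augenrules[1791]: No rules` entry above follows directly from the preceding sudo commands: the two default rule files were deleted from /etc/audit/rules.d/ before audit-rules was restarted, so augenrules had nothing to compile. Rule files in that directory use auditctl syntax; a purely hypothetical fragment (not from this system) looks like:

```
# /etc/audit/rules.d/10-example.rules — illustrative only
-D                                  # flush any previously loaded rules
-b 8192                             # kernel audit backlog buffer size
-w /etc/passwd -p wa -k identity    # watch writes/attribute changes, tag "identity"
```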
Jun 20 19:42:32.104802 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 19:42:32.106826 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit.
Jun 20 19:42:32.113124 systemd[1]: Started sshd@6-172.24.4.229:22-172.24.4.1:33536.service - OpenSSH per-connection server daemon (172.24.4.1:33536).
Jun 20 19:42:32.116001 systemd-logind[1497]: Removed session 8.
Jun 20 19:42:33.648714 sshd[1800]: Accepted publickey for core from 172.24.4.1 port 33536 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:42:33.651426 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:42:33.662514 systemd-logind[1497]: New session 9 of user core.
Jun 20 19:42:33.673686 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 19:42:34.146687 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 19:42:34.147384 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:42:35.283948 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 19:42:35.305494 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 19:42:35.764166 dockerd[1820]: time="2025-06-20T19:42:35.764084875Z" level=info msg="Starting up"
Jun 20 19:42:35.769708 dockerd[1820]: time="2025-06-20T19:42:35.769662827Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 19:42:35.897956 systemd[1]: var-lib-docker-metacopy\x2dcheck2133697707-merged.mount: Deactivated successfully.
Jun 20 19:42:35.930923 dockerd[1820]: time="2025-06-20T19:42:35.930363916Z" level=info msg="Loading containers: start."
Jun 20 19:42:35.960236 kernel: Initializing XFRM netlink socket
Jun 20 19:42:36.392054 systemd-networkd[1427]: docker0: Link UP
Jun 20 19:42:36.397648 dockerd[1820]: time="2025-06-20T19:42:36.397612511Z" level=info msg="Loading containers: done."
Jun 20 19:42:36.425210 dockerd[1820]: time="2025-06-20T19:42:36.424800029Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 19:42:36.425210 dockerd[1820]: time="2025-06-20T19:42:36.424889416Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 19:42:36.425210 dockerd[1820]: time="2025-06-20T19:42:36.425012336Z" level=info msg="Initializing buildkit"
Jun 20 19:42:36.425802 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3501437584-merged.mount: Deactivated successfully.
Jun 20 19:42:36.485214 dockerd[1820]: time="2025-06-20T19:42:36.485085027Z" level=info msg="Completed buildkit initialization"
Jun 20 19:42:36.502166 dockerd[1820]: time="2025-06-20T19:42:36.502016812Z" level=info msg="Daemon has completed initialization"
Jun 20 19:42:36.503274 dockerd[1820]: time="2025-06-20T19:42:36.502485194Z" level=info msg="API listen on /run/docker.sock"
Jun 20 19:42:36.503083 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 19:42:38.259119 containerd[1512]: time="2025-06-20T19:42:38.258561476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jun 20 19:42:39.007854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:42:39.016228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:42:39.091812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217320681.mount: Deactivated successfully.
Jun 20 19:42:39.483482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:42:39.502060 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:42:39.606509 kubelet[2035]: E0620 19:42:39.606426 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:42:39.609880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:42:39.610039 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:42:39.610446 systemd[1]: kubelet.service: Consumed 352ms CPU time, 110.4M memory peak.
Jun 20 19:42:41.338434 containerd[1512]: time="2025-06-20T19:42:41.338330059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:41.340139 containerd[1512]: time="2025-06-20T19:42:41.340086522Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053"
Jun 20 19:42:41.341630 containerd[1512]: time="2025-06-20T19:42:41.341549197Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:41.345378 containerd[1512]: time="2025-06-20T19:42:41.345325682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:41.346615 containerd[1512]: time="2025-06-20T19:42:41.346571267Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 3.086141204s"
Jun 20 19:42:41.346731 containerd[1512]: time="2025-06-20T19:42:41.346711251Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jun 20 19:42:41.347974 containerd[1512]: time="2025-06-20T19:42:41.347955293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jun 20 19:42:43.568421 containerd[1512]: time="2025-06-20T19:42:43.568252037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:43.572523 containerd[1512]: time="2025-06-20T19:42:43.572386427Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920"
Jun 20 19:42:43.575772 containerd[1512]: time="2025-06-20T19:42:43.575683309Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:43.581826 containerd[1512]: time="2025-06-20T19:42:43.581719677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:43.584566 containerd[1512]: time="2025-06-20T19:42:43.584243549Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.236135798s"
Jun 20 19:42:43.584566 containerd[1512]: time="2025-06-20T19:42:43.584335492Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jun 20 19:42:43.585961 containerd[1512]: time="2025-06-20T19:42:43.585141071Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jun 20 19:42:45.529461 containerd[1512]: time="2025-06-20T19:42:45.529387304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:45.530994 containerd[1512]: time="2025-06-20T19:42:45.530970992Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924"
Jun 20 19:42:45.535091 containerd[1512]: time="2025-06-20T19:42:45.535047318Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:45.539794 containerd[1512]: time="2025-06-20T19:42:45.539748101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:45.542282 containerd[1512]: time="2025-06-20T19:42:45.542170945Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.95694181s"
Jun 20 19:42:45.542282 containerd[1512]: time="2025-06-20T19:42:45.542231289Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jun 20 19:42:45.542917 containerd[1512]: time="2025-06-20T19:42:45.542854343Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jun 20 19:42:47.234469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378275160.mount: Deactivated successfully.
Jun 20 19:42:47.977936 containerd[1512]: time="2025-06-20T19:42:47.977827647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:47.984205 containerd[1512]: time="2025-06-20T19:42:47.983401414Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371"
Jun 20 19:42:47.984205 containerd[1512]: time="2025-06-20T19:42:47.983605890Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:47.985772 containerd[1512]: time="2025-06-20T19:42:47.985745777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:42:47.986417 containerd[1512]: time="2025-06-20T19:42:47.986382353Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.443436814s"
Jun 20 19:42:47.986550 containerd[1512]: time="2025-06-20T19:42:47.986417312Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jun 20 19:42:47.987155 containerd[1512]: time="2025-06-20T19:42:47.987125231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jun 20 19:42:48.682050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472794518.mount: Deactivated successfully.
Jun 20 19:42:49.755823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 20 19:42:49.764745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:42:50.269317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:42:50.278557 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:42:50.385215 kubelet[2168]: E0620 19:42:50.384130 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:42:50.387639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:42:50.387907 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:42:50.389015 systemd[1]: kubelet.service: Consumed 273ms CPU time, 110.1M memory peak.
Jun 20 19:42:50.480851 containerd[1512]: time="2025-06-20T19:42:50.480767795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:42:50.482901 containerd[1512]: time="2025-06-20T19:42:50.482867855Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jun 20 19:42:50.484122 containerd[1512]: time="2025-06-20T19:42:50.484063943Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:42:50.489195 containerd[1512]: time="2025-06-20T19:42:50.489146813Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.501991082s" Jun 20 19:42:50.489395 containerd[1512]: time="2025-06-20T19:42:50.489341087Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:42:50.489590 containerd[1512]: time="2025-06-20T19:42:50.489303562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:42:50.491010 containerd[1512]: time="2025-06-20T19:42:50.490898618Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:42:51.096055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786130372.mount: Deactivated successfully. 
Jun 20 19:42:51.107791 containerd[1512]: time="2025-06-20T19:42:51.107721651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:42:51.111238 containerd[1512]: time="2025-06-20T19:42:51.111131926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 20 19:42:51.111694 containerd[1512]: time="2025-06-20T19:42:51.111634283Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:42:51.117118 containerd[1512]: time="2025-06-20T19:42:51.117025194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:42:51.119278 containerd[1512]: time="2025-06-20T19:42:51.119170071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 628.230463ms" Jun 20 19:42:51.119539 containerd[1512]: time="2025-06-20T19:42:51.119493827Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:42:51.120536 containerd[1512]: time="2025-06-20T19:42:51.120409849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 19:42:51.789841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298189603.mount: Deactivated 
successfully. Jun 20 19:42:54.906751 containerd[1512]: time="2025-06-20T19:42:54.906653831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:42:54.908195 containerd[1512]: time="2025-06-20T19:42:54.907924903Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jun 20 19:42:54.909468 containerd[1512]: time="2025-06-20T19:42:54.909433379Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:42:54.912920 containerd[1512]: time="2025-06-20T19:42:54.912889677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:42:54.914465 containerd[1512]: time="2025-06-20T19:42:54.914413243Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.793547107s" Jun 20 19:42:54.914520 containerd[1512]: time="2025-06-20T19:42:54.914467649Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 20 19:42:58.861113 update_engine[1498]: I20250620 19:42:58.860819 1498 update_attempter.cc:509] Updating boot flags... Jun 20 19:42:58.904900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:42:58.905095 systemd[1]: kubelet.service: Consumed 273ms CPU time, 110.1M memory peak. 
Jun 20 19:42:58.908550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:42:58.962350 systemd[1]: Reload requested from client PID 2272 ('systemctl') (unit session-9.scope)... Jun 20 19:42:58.962400 systemd[1]: Reloading... Jun 20 19:42:59.111506 zram_generator::config[2322]: No configuration found. Jun 20 19:42:59.253987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:42:59.402455 systemd[1]: Reloading finished in 439 ms. Jun 20 19:42:59.499434 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:42:59.529758 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:42:59.538706 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:42:59.538959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:42:59.539011 systemd[1]: kubelet.service: Consumed 179ms CPU time, 104.4M memory peak. Jun 20 19:42:59.541503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:43:00.331412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:43:00.339465 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:43:00.435173 kubelet[2394]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:43:00.436348 kubelet[2394]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jun 20 19:43:00.436348 kubelet[2394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:43:00.436348 kubelet[2394]: I0620 19:43:00.435917 2394 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:43:01.892275 kubelet[2394]: I0620 19:43:01.892160 2394 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:43:01.892275 kubelet[2394]: I0620 19:43:01.892269 2394 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:43:01.893037 kubelet[2394]: I0620 19:43:01.892993 2394 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:43:01.929200 kubelet[2394]: I0620 19:43:01.929101 2394 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:43:01.929200 kubelet[2394]: E0620 19:43:01.929132 2394 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.229:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:43:01.945378 kubelet[2394]: I0620 19:43:01.945318 2394 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:43:01.953201 kubelet[2394]: I0620 19:43:01.953038 2394 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:43:01.957213 kubelet[2394]: I0620 19:43:01.957163 2394 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:43:01.958256 kubelet[2394]: I0620 19:43:01.957355 2394 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-0-8-afb8bdccbb.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:43:01.958256 kubelet[2394]: I0620 19:43:01.957942 2394 topology_manager.go:138] "Creating topology 
manager with none policy" Jun 20 19:43:01.958256 kubelet[2394]: I0620 19:43:01.957957 2394 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:43:01.958256 kubelet[2394]: I0620 19:43:01.958123 2394 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:43:01.963584 kubelet[2394]: I0620 19:43:01.963567 2394 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:43:01.964079 kubelet[2394]: I0620 19:43:01.963752 2394 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:43:01.964079 kubelet[2394]: I0620 19:43:01.963810 2394 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:43:01.964079 kubelet[2394]: I0620 19:43:01.963839 2394 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:43:01.977694 kubelet[2394]: W0620 19:43:01.977639 2394 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.229:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused Jun 20 19:43:01.977851 kubelet[2394]: E0620 19:43:01.977830 2394 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.229:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:43:01.979098 kubelet[2394]: I0620 19:43:01.978370 2394 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:43:01.979098 kubelet[2394]: W0620 19:43:01.978773 2394 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.229:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-0-8-afb8bdccbb.novalocal&limit=500&resourceVersion=0": 
dial tcp 172.24.4.229:6443: connect: connection refused Jun 20 19:43:01.979098 kubelet[2394]: E0620 19:43:01.978897 2394 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.229:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-0-8-afb8bdccbb.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:43:01.979098 kubelet[2394]: I0620 19:43:01.978932 2394 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:43:01.979098 kubelet[2394]: W0620 19:43:01.979024 2394 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:43:01.981904 kubelet[2394]: I0620 19:43:01.981886 2394 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:43:01.982026 kubelet[2394]: I0620 19:43:01.982015 2394 server.go:1287] "Started kubelet" Jun 20 19:43:01.984245 kubelet[2394]: I0620 19:43:01.984229 2394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:43:01.991325 kubelet[2394]: E0620 19:43:01.989308 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.229:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.229:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-1-0-8-afb8bdccbb.novalocal.184ad7b6069f3c5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-1-0-8-afb8bdccbb.novalocal,UID:ci-4344-1-0-8-afb8bdccbb.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-1-0-8-afb8bdccbb.novalocal,},FirstTimestamp:2025-06-20 19:43:01.981977695 +0000 UTC m=+1.635738707,LastTimestamp:2025-06-20 19:43:01.981977695 +0000 UTC 
m=+1.635738707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-1-0-8-afb8bdccbb.novalocal,}" Jun 20 19:43:01.994025 kubelet[2394]: I0620 19:43:01.993994 2394 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:43:01.995338 kubelet[2394]: I0620 19:43:01.995300 2394 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:43:01.996200 kubelet[2394]: I0620 19:43:01.996045 2394 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:43:01.996784 kubelet[2394]: I0620 19:43:01.996721 2394 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:43:01.997492 kubelet[2394]: I0620 19:43:01.997475 2394 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:43:01.997791 kubelet[2394]: E0620 19:43:01.996976 2394 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" Jun 20 19:43:01.998022 kubelet[2394]: I0620 19:43:01.998002 2394 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:43:02.000772 kubelet[2394]: E0620 19:43:02.000429 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.229:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-8-afb8bdccbb.novalocal?timeout=10s\": dial tcp 172.24.4.229:6443: connect: connection refused" interval="200ms" Jun 20 19:43:02.001150 kubelet[2394]: I0620 19:43:02.001055 2394 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:43:02.001429 kubelet[2394]: I0620 19:43:02.001311 2394 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:43:02.004032 kubelet[2394]: I0620 19:43:02.004007 2394 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:43:02.005146 kubelet[2394]: I0620 19:43:02.004625 2394 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:43:02.005146 kubelet[2394]: I0620 19:43:02.004821 2394 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:43:02.019701 kubelet[2394]: I0620 19:43:02.019658 2394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:43:02.020868 kubelet[2394]: I0620 19:43:02.020851 2394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:43:02.020981 kubelet[2394]: I0620 19:43:02.020968 2394 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:43:02.021086 kubelet[2394]: I0620 19:43:02.021073 2394 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:43:02.021204 kubelet[2394]: I0620 19:43:02.021192 2394 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:43:02.021337 kubelet[2394]: E0620 19:43:02.021305 2394 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:43:02.032463 kubelet[2394]: E0620 19:43:02.032402 2394 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:43:02.032940 kubelet[2394]: W0620 19:43:02.032861 2394 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.229:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused Jun 20 19:43:02.033021 kubelet[2394]: E0620 19:43:02.032915 2394 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.229:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:43:02.039212 kubelet[2394]: W0620 19:43:02.038751 2394 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.229:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused Jun 20 19:43:02.039365 kubelet[2394]: E0620 19:43:02.039343 2394 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.229:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:43:02.042643 kubelet[2394]: I0620 19:43:02.042583 2394 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:43:02.042643 kubelet[2394]: I0620 19:43:02.042637 2394 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:43:02.042729 kubelet[2394]: I0620 19:43:02.042692 2394 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:43:02.047652 kubelet[2394]: I0620 19:43:02.047609 2394 policy_none.go:49] "None policy: Start" Jun 20 19:43:02.047698 
kubelet[2394]: I0620 19:43:02.047687 2394 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:43:02.047764 kubelet[2394]: I0620 19:43:02.047742 2394 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:43:02.058639 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:43:02.072126 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:43:02.077616 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:43:02.090074 kubelet[2394]: I0620 19:43:02.090050 2394 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:43:02.091920 kubelet[2394]: I0620 19:43:02.091903 2394 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:43:02.093111 kubelet[2394]: I0620 19:43:02.092547 2394 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:43:02.094154 kubelet[2394]: I0620 19:43:02.093767 2394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:43:02.096515 kubelet[2394]: E0620 19:43:02.096475 2394 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:43:02.096665 kubelet[2394]: E0620 19:43:02.096631 2394 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" Jun 20 19:43:02.134789 systemd[1]: Created slice kubepods-burstable-podc499a0e4dc72ddd30e16bd682f95ed0b.slice - libcontainer container kubepods-burstable-podc499a0e4dc72ddd30e16bd682f95ed0b.slice. 
Jun 20 19:43:02.146018 kubelet[2394]: E0620 19:43:02.144442 2394 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.150598 systemd[1]: Created slice kubepods-burstable-pod5e5a5a43475fdd6885709ff298547765.slice - libcontainer container kubepods-burstable-pod5e5a5a43475fdd6885709ff298547765.slice. Jun 20 19:43:02.159957 kubelet[2394]: E0620 19:43:02.159906 2394 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.163275 systemd[1]: Created slice kubepods-burstable-pod9d0a4eb2989671422410a02ed8a0bade.slice - libcontainer container kubepods-burstable-pod9d0a4eb2989671422410a02ed8a0bade.slice. Jun 20 19:43:02.166529 kubelet[2394]: E0620 19:43:02.166486 2394 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.196272 kubelet[2394]: I0620 19:43:02.196220 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.197472 kubelet[2394]: E0620 19:43:02.197408 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.229:6443/api/v1/nodes\": dial tcp 172.24.4.229:6443: connect: connection refused" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.202079 kubelet[2394]: E0620 19:43:02.201992 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.229:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-8-afb8bdccbb.novalocal?timeout=10s\": dial tcp 172.24.4.229:6443: connect: connection refused" interval="400ms" Jun 20 
19:43:02.206210 kubelet[2394]: I0620 19:43:02.205888 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.206210 kubelet[2394]: I0620 19:43:02.205954 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d0a4eb2989671422410a02ed8a0bade-kubeconfig\") pod \"kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"9d0a4eb2989671422410a02ed8a0bade\") " pod="kube-system/kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.206210 kubelet[2394]: I0620 19:43:02.205993 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c499a0e4dc72ddd30e16bd682f95ed0b-ca-certs\") pod \"kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"c499a0e4dc72ddd30e16bd682f95ed0b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.206210 kubelet[2394]: I0620 19:43:02.206029 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c499a0e4dc72ddd30e16bd682f95ed0b-k8s-certs\") pod \"kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"c499a0e4dc72ddd30e16bd682f95ed0b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.206210 kubelet[2394]: I0620 19:43:02.206063 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.206847 kubelet[2394]: I0620 19:43:02.206101 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.206847 kubelet[2394]: I0620 19:43:02.206139 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c499a0e4dc72ddd30e16bd682f95ed0b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"c499a0e4dc72ddd30e16bd682f95ed0b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.206847 kubelet[2394]: I0620 19:43:02.206538 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-ca-certs\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.206847 kubelet[2394]: I0620 19:43:02.206661 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: 
\"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.401602 kubelet[2394]: I0620 19:43:02.401317 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.402597 kubelet[2394]: E0620 19:43:02.401922 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.229:6443/api/v1/nodes\": dial tcp 172.24.4.229:6443: connect: connection refused" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.449194 containerd[1512]: time="2025-06-20T19:43:02.449064620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal,Uid:c499a0e4dc72ddd30e16bd682f95ed0b,Namespace:kube-system,Attempt:0,}" Jun 20 19:43:02.462320 containerd[1512]: time="2025-06-20T19:43:02.462162920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal,Uid:5e5a5a43475fdd6885709ff298547765,Namespace:kube-system,Attempt:0,}" Jun 20 19:43:02.468734 containerd[1512]: time="2025-06-20T19:43:02.468663105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal,Uid:9d0a4eb2989671422410a02ed8a0bade,Namespace:kube-system,Attempt:0,}" Jun 20 19:43:02.571583 containerd[1512]: time="2025-06-20T19:43:02.571436513Z" level=info msg="connecting to shim 81174ff375c44911bbde1328bb1d19b62c46fc7ceb442a4fa6c275ad93aafb13" address="unix:///run/containerd/s/bc2e1f90cae8c0d54dfd319320643f8a104a3251848b99db1a2a0f5d342b7029" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:43:02.587268 containerd[1512]: time="2025-06-20T19:43:02.585934129Z" level=info msg="connecting to shim d25d82aa40ea19901c2f227f436c59bf6adca0fe866097bc4eee57d8dde9656c" address="unix:///run/containerd/s/3a20b2c28deb07b1ebc08b834fc1189e7d4159cd502f336dabca1adea9d11781" namespace=k8s.io protocol=ttrpc 
version=3 Jun 20 19:43:02.588171 containerd[1512]: time="2025-06-20T19:43:02.588135611Z" level=info msg="connecting to shim 9117798e82df29578e92f871e62eb25267485e766a9ea9ef1504efc434d23654" address="unix:///run/containerd/s/5484d0e347f4cdcbd646c4789b6836a6791b5b1bb2230cd65599c79aade7e758" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:43:02.603948 kubelet[2394]: E0620 19:43:02.603247 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.229:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-8-afb8bdccbb.novalocal?timeout=10s\": dial tcp 172.24.4.229:6443: connect: connection refused" interval="800ms" Jun 20 19:43:02.624455 systemd[1]: Started cri-containerd-81174ff375c44911bbde1328bb1d19b62c46fc7ceb442a4fa6c275ad93aafb13.scope - libcontainer container 81174ff375c44911bbde1328bb1d19b62c46fc7ceb442a4fa6c275ad93aafb13. Jun 20 19:43:02.636373 systemd[1]: Started cri-containerd-9117798e82df29578e92f871e62eb25267485e766a9ea9ef1504efc434d23654.scope - libcontainer container 9117798e82df29578e92f871e62eb25267485e766a9ea9ef1504efc434d23654. Jun 20 19:43:02.657356 systemd[1]: Started cri-containerd-d25d82aa40ea19901c2f227f436c59bf6adca0fe866097bc4eee57d8dde9656c.scope - libcontainer container d25d82aa40ea19901c2f227f436c59bf6adca0fe866097bc4eee57d8dde9656c. 
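The repeated "connection refused" errors above (node registration and the lease controller both failing to reach https://172.24.4.229:6443) are the expected bootstrap race: the kubelet keeps retrying while the static kube-apiserver pod is still being created in the sandboxes below. A minimal sketch of that retry-with-backoff pattern, using a hypothetical `wait_for_endpoint` helper (illustrative only, not kubelet code):

```python
import socket
import time

def wait_for_endpoint(host, port, attempts=5, base_delay=0.1):
    """Retry a TCP connect with exponential backoff, mirroring how the
    kubelet keeps retrying the API server endpoint until the static
    kube-apiserver pod comes up. Returns the attempt index that
    succeeded, or -1 if the endpoint never became reachable."""
    delay = base_delay
    for attempt in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1):
                return attempt  # reachable on this attempt
        except OSError:  # connection refused, unreachable, or timeout
            time.sleep(delay)
            delay *= 2  # back off before the next attempt
    return -1
```

The real kubelet layers jitter and per-controller retry intervals on top (note the `interval="800ms"` in the lease-controller error above), but the shape is the same: fail fast, wait, try again.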
Jun 20 19:43:02.720804 containerd[1512]: time="2025-06-20T19:43:02.720761031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal,Uid:5e5a5a43475fdd6885709ff298547765,Namespace:kube-system,Attempt:0,} returns sandbox id \"9117798e82df29578e92f871e62eb25267485e766a9ea9ef1504efc434d23654\"" Jun 20 19:43:02.723217 containerd[1512]: time="2025-06-20T19:43:02.722560910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal,Uid:c499a0e4dc72ddd30e16bd682f95ed0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"81174ff375c44911bbde1328bb1d19b62c46fc7ceb442a4fa6c275ad93aafb13\"" Jun 20 19:43:02.727886 containerd[1512]: time="2025-06-20T19:43:02.727837037Z" level=info msg="CreateContainer within sandbox \"9117798e82df29578e92f871e62eb25267485e766a9ea9ef1504efc434d23654\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:43:02.728164 containerd[1512]: time="2025-06-20T19:43:02.727842156Z" level=info msg="CreateContainer within sandbox \"81174ff375c44911bbde1328bb1d19b62c46fc7ceb442a4fa6c275ad93aafb13\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:43:02.749903 containerd[1512]: time="2025-06-20T19:43:02.749859322Z" level=info msg="Container 9e3b02bc65695beb8b5a3f164e91d1589b7b65dd28f38f7ad654c45c282f8079: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:43:02.753541 containerd[1512]: time="2025-06-20T19:43:02.753476622Z" level=info msg="Container b1ce67893122a388605eb531d1f1c91556f92b53a1d286a7fda20f377892866e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:43:02.763078 containerd[1512]: time="2025-06-20T19:43:02.762936411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal,Uid:9d0a4eb2989671422410a02ed8a0bade,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"d25d82aa40ea19901c2f227f436c59bf6adca0fe866097bc4eee57d8dde9656c\"" Jun 20 19:43:02.763770 containerd[1512]: time="2025-06-20T19:43:02.763658683Z" level=info msg="CreateContainer within sandbox \"9117798e82df29578e92f871e62eb25267485e766a9ea9ef1504efc434d23654\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9e3b02bc65695beb8b5a3f164e91d1589b7b65dd28f38f7ad654c45c282f8079\"" Jun 20 19:43:02.765210 containerd[1512]: time="2025-06-20T19:43:02.764363180Z" level=info msg="StartContainer for \"9e3b02bc65695beb8b5a3f164e91d1589b7b65dd28f38f7ad654c45c282f8079\"" Jun 20 19:43:02.766026 containerd[1512]: time="2025-06-20T19:43:02.766002770Z" level=info msg="connecting to shim 9e3b02bc65695beb8b5a3f164e91d1589b7b65dd28f38f7ad654c45c282f8079" address="unix:///run/containerd/s/5484d0e347f4cdcbd646c4789b6836a6791b5b1bb2230cd65599c79aade7e758" protocol=ttrpc version=3 Jun 20 19:43:02.767113 containerd[1512]: time="2025-06-20T19:43:02.767081718Z" level=info msg="CreateContainer within sandbox \"d25d82aa40ea19901c2f227f436c59bf6adca0fe866097bc4eee57d8dde9656c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:43:02.772745 containerd[1512]: time="2025-06-20T19:43:02.772718711Z" level=info msg="CreateContainer within sandbox \"81174ff375c44911bbde1328bb1d19b62c46fc7ceb442a4fa6c275ad93aafb13\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1ce67893122a388605eb531d1f1c91556f92b53a1d286a7fda20f377892866e\"" Jun 20 19:43:02.773372 containerd[1512]: time="2025-06-20T19:43:02.773351800Z" level=info msg="StartContainer for \"b1ce67893122a388605eb531d1f1c91556f92b53a1d286a7fda20f377892866e\"" Jun 20 19:43:02.775683 containerd[1512]: time="2025-06-20T19:43:02.775657674Z" level=info msg="connecting to shim b1ce67893122a388605eb531d1f1c91556f92b53a1d286a7fda20f377892866e" address="unix:///run/containerd/s/bc2e1f90cae8c0d54dfd319320643f8a104a3251848b99db1a2a0f5d342b7029" protocol=ttrpc version=3 
Jun 20 19:43:02.790398 systemd[1]: Started cri-containerd-9e3b02bc65695beb8b5a3f164e91d1589b7b65dd28f38f7ad654c45c282f8079.scope - libcontainer container 9e3b02bc65695beb8b5a3f164e91d1589b7b65dd28f38f7ad654c45c282f8079. Jun 20 19:43:02.793361 containerd[1512]: time="2025-06-20T19:43:02.793318745Z" level=info msg="Container 82dbd18606005c4a696e0b26b9feef32a67cbe99eea080c69fc047b2eb7449a6: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:43:02.805583 kubelet[2394]: I0620 19:43:02.805549 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.805894 kubelet[2394]: E0620 19:43:02.805861 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.229:6443/api/v1/nodes\": dial tcp 172.24.4.229:6443: connect: connection refused" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:02.810801 containerd[1512]: time="2025-06-20T19:43:02.810759152Z" level=info msg="CreateContainer within sandbox \"d25d82aa40ea19901c2f227f436c59bf6adca0fe866097bc4eee57d8dde9656c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"82dbd18606005c4a696e0b26b9feef32a67cbe99eea080c69fc047b2eb7449a6\"" Jun 20 19:43:02.811678 containerd[1512]: time="2025-06-20T19:43:02.811303631Z" level=info msg="StartContainer for \"82dbd18606005c4a696e0b26b9feef32a67cbe99eea080c69fc047b2eb7449a6\"" Jun 20 19:43:02.813233 containerd[1512]: time="2025-06-20T19:43:02.813149798Z" level=info msg="connecting to shim 82dbd18606005c4a696e0b26b9feef32a67cbe99eea080c69fc047b2eb7449a6" address="unix:///run/containerd/s/3a20b2c28deb07b1ebc08b834fc1189e7d4159cd502f336dabca1adea9d11781" protocol=ttrpc version=3 Jun 20 19:43:02.815347 systemd[1]: Started cri-containerd-b1ce67893122a388605eb531d1f1c91556f92b53a1d286a7fda20f377892866e.scope - libcontainer container b1ce67893122a388605eb531d1f1c91556f92b53a1d286a7fda20f377892866e. 
Jun 20 19:43:02.849604 systemd[1]: Started cri-containerd-82dbd18606005c4a696e0b26b9feef32a67cbe99eea080c69fc047b2eb7449a6.scope - libcontainer container 82dbd18606005c4a696e0b26b9feef32a67cbe99eea080c69fc047b2eb7449a6. Jun 20 19:43:02.884366 containerd[1512]: time="2025-06-20T19:43:02.884233031Z" level=info msg="StartContainer for \"9e3b02bc65695beb8b5a3f164e91d1589b7b65dd28f38f7ad654c45c282f8079\" returns successfully" Jun 20 19:43:02.917452 containerd[1512]: time="2025-06-20T19:43:02.917235124Z" level=info msg="StartContainer for \"b1ce67893122a388605eb531d1f1c91556f92b53a1d286a7fda20f377892866e\" returns successfully" Jun 20 19:43:02.955847 kubelet[2394]: W0620 19:43:02.955739 2394 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.229:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-0-8-afb8bdccbb.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.229:6443: connect: connection refused Jun 20 19:43:02.955847 kubelet[2394]: E0620 19:43:02.955817 2394 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.229:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-0-8-afb8bdccbb.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.229:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:43:02.995370 containerd[1512]: time="2025-06-20T19:43:02.995313126Z" level=info msg="StartContainer for \"82dbd18606005c4a696e0b26b9feef32a67cbe99eea080c69fc047b2eb7449a6\" returns successfully" Jun 20 19:43:03.045720 kubelet[2394]: E0620 19:43:03.045555 2394 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:03.050203 kubelet[2394]: E0620 19:43:03.050161 2394 kubelet.go:3190] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:03.057529 kubelet[2394]: E0620 19:43:03.057373 2394 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:03.608239 kubelet[2394]: I0620 19:43:03.607916 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:04.055778 kubelet[2394]: E0620 19:43:04.055749 2394 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:04.058188 kubelet[2394]: E0620 19:43:04.056322 2394 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:04.780197 kubelet[2394]: E0620 19:43:04.780129 2394 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:04.913640 kubelet[2394]: E0620 19:43:04.913592 2394 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:04.980815 kubelet[2394]: I0620 19:43:04.980474 2394 apiserver.go:52] "Watching apiserver" Jun 20 19:43:05.005019 kubelet[2394]: I0620 19:43:05.004972 2394 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:43:05.038669 kubelet[2394]: I0620 19:43:05.037808 2394 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:05.038669 kubelet[2394]: E0620 19:43:05.037873 2394 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344-1-0-8-afb8bdccbb.novalocal\": node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" not found" Jun 20 19:43:05.098578 kubelet[2394]: I0620 19:43:05.098162 2394 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:05.116506 kubelet[2394]: E0620 19:43:05.116392 2394 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:05.116506 kubelet[2394]: I0620 19:43:05.116449 2394 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:05.120896 kubelet[2394]: E0620 19:43:05.120834 2394 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:05.120896 kubelet[2394]: I0620 19:43:05.120867 2394 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:05.123232 kubelet[2394]: E0620 19:43:05.123160 2394 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:07.461884 systemd[1]: Reload requested from client PID 2666 ('systemctl') (unit session-9.scope)... 
Jun 20 19:43:07.461925 systemd[1]: Reloading... Jun 20 19:43:07.604242 zram_generator::config[2710]: No configuration found. Jun 20 19:43:07.717265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:43:07.877112 systemd[1]: Reloading finished in 414 ms. Jun 20 19:43:07.922093 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:43:07.939641 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:43:07.939895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:43:07.939947 systemd[1]: kubelet.service: Consumed 2.300s CPU time, 132.9M memory peak. Jun 20 19:43:07.943521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:43:08.270558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:43:08.280614 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:43:08.336640 kubelet[2775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:43:08.338197 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:43:08.338197 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
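During the reload above, systemd warns that `docker.socket:6` points `ListenStream=` at the legacy `/var/run/` directory and rewrites it to `/run/docker.sock` on the fly. The warning recurs on every reload until the unit itself is updated; one permanent fix is a drop-in override, sketched here with an illustrative path and filename (the empty-assignment-then-reassign idiom is how systemd list options are reset):

```ini
# /etc/kubernetes/..-style paths vary; this drop-in location is hypothetical:
# /etc/systemd/system/docker.socket.d/10-runtime-dir.conf
[Socket]
# An empty assignment clears the inherited ListenStream list...
ListenStream=
# ...then re-add the socket under /run instead of the legacy /var/run.
ListenStream=/run/docker.sock
```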
Jun 20 19:43:08.338197 kubelet[2775]: I0620 19:43:08.337131 2775 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:43:08.346466 kubelet[2775]: I0620 19:43:08.346437 2775 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:43:08.346656 kubelet[2775]: I0620 19:43:08.346638 2775 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:43:08.347045 kubelet[2775]: I0620 19:43:08.347028 2775 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:43:08.348704 kubelet[2775]: I0620 19:43:08.348688 2775 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 19:43:08.351822 kubelet[2775]: I0620 19:43:08.351799 2775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:43:08.357527 kubelet[2775]: I0620 19:43:08.357511 2775 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:43:08.360981 kubelet[2775]: I0620 19:43:08.360960 2775 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:43:08.361327 kubelet[2775]: I0620 19:43:08.361294 2775 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:43:08.361574 kubelet[2775]: I0620 19:43:08.361392 2775 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-0-8-afb8bdccbb.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:43:08.361718 kubelet[2775]: I0620 19:43:08.361706 2775 topology_manager.go:138] "Creating topology 
manager with none policy" Jun 20 19:43:08.361783 kubelet[2775]: I0620 19:43:08.361774 2775 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:43:08.361877 kubelet[2775]: I0620 19:43:08.361866 2775 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:43:08.362071 kubelet[2775]: I0620 19:43:08.362057 2775 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:43:08.362146 kubelet[2775]: I0620 19:43:08.362136 2775 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:43:08.362253 kubelet[2775]: I0620 19:43:08.362242 2775 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:43:08.362333 kubelet[2775]: I0620 19:43:08.362322 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:43:08.365334 kubelet[2775]: I0620 19:43:08.365280 2775 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:43:08.366070 kubelet[2775]: I0620 19:43:08.366055 2775 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:43:08.375122 kubelet[2775]: I0620 19:43:08.375094 2775 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:43:08.376038 kubelet[2775]: I0620 19:43:08.375991 2775 server.go:1287] "Started kubelet" Jun 20 19:43:08.380137 kubelet[2775]: I0620 19:43:08.380095 2775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:43:08.380495 kubelet[2775]: I0620 19:43:08.380478 2775 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:43:08.387636 kubelet[2775]: I0620 19:43:08.386162 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:43:08.399059 kubelet[2775]: I0620 19:43:08.399017 2775 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:43:08.401532 kubelet[2775]: 
I0620 19:43:08.401511 2775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:43:08.403155 kubelet[2775]: I0620 19:43:08.403140 2775 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:43:08.407098 kubelet[2775]: I0620 19:43:08.407073 2775 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:43:08.407395 kubelet[2775]: I0620 19:43:08.407380 2775 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:43:08.409482 kubelet[2775]: I0620 19:43:08.409444 2775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:43:08.410648 kubelet[2775]: I0620 19:43:08.410632 2775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:43:08.410758 kubelet[2775]: I0620 19:43:08.410746 2775 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:43:08.410850 kubelet[2775]: I0620 19:43:08.410839 2775 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
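The `container_manager_linux.go` NodeConfig dump a few lines above embeds five `HardEvictionThresholds`, each expressed as either a quantity or a percentage. As a reading aid, a small sketch that parses that JSON fragment (threshold values copied from the log entry; the `describe_thresholds` helper is hypothetical):

```python
import json

# Abridged to the eviction-threshold fields of the NodeConfig logged above.
node_config = json.loads("""
{"HardEvictionThresholds":[
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]}
""")

def describe_thresholds(cfg):
    """Render each hard-eviction threshold as 'signal < limit',
    preferring the absolute Quantity when one is set."""
    out = []
    for t in cfg["HardEvictionThresholds"]:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
        out.append(f"{t['Signal']} < {limit}")
    return out
```

So on this node the kubelet hard-evicts pods when, e.g., available memory drops below 100Mi or nodefs free space drops below 10%.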
Jun 20 19:43:08.410917 kubelet[2775]: I0620 19:43:08.410908 2775 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:43:08.411020 kubelet[2775]: E0620 19:43:08.410999 2775 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:43:08.416941 kubelet[2775]: I0620 19:43:08.416910 2775 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:43:08.419768 kubelet[2775]: I0620 19:43:08.419534 2775 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:43:08.419768 kubelet[2775]: I0620 19:43:08.419634 2775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:43:08.422217 kubelet[2775]: E0620 19:43:08.422190 2775 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:43:08.423147 kubelet[2775]: I0620 19:43:08.422221 2775 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:43:08.473107 kubelet[2775]: I0620 19:43:08.472965 2775 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:43:08.473107 kubelet[2775]: I0620 19:43:08.473077 2775 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:43:08.473298 kubelet[2775]: I0620 19:43:08.473155 2775 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:43:08.473439 kubelet[2775]: I0620 19:43:08.473417 2775 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:43:08.473477 kubelet[2775]: I0620 19:43:08.473434 2775 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:43:08.473477 kubelet[2775]: I0620 19:43:08.473456 2775 policy_none.go:49] "None policy: Start" Jun 20 19:43:08.473477 kubelet[2775]: I0620 19:43:08.473466 2775 
memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:43:08.473563 kubelet[2775]: I0620 19:43:08.473479 2775 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:43:08.473653 kubelet[2775]: I0620 19:43:08.473634 2775 state_mem.go:75] "Updated machine memory state" Jun 20 19:43:08.481849 kubelet[2775]: I0620 19:43:08.481105 2775 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:43:08.481849 kubelet[2775]: I0620 19:43:08.481298 2775 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:43:08.481849 kubelet[2775]: I0620 19:43:08.481309 2775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:43:08.482606 kubelet[2775]: I0620 19:43:08.482533 2775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:43:08.486686 kubelet[2775]: E0620 19:43:08.486607 2775 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 19:43:08.512856 kubelet[2775]: I0620 19:43:08.512569 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.513339 kubelet[2775]: I0620 19:43:08.513324 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.513773 kubelet[2775]: I0620 19:43:08.513651 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.521333 kubelet[2775]: W0620 19:43:08.521233 2775 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:43:08.524465 kubelet[2775]: W0620 19:43:08.524339 2775 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:43:08.525974 kubelet[2775]: W0620 19:43:08.525945 2775 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:43:08.591944 kubelet[2775]: I0620 19:43:08.590987 2775 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.603868 kubelet[2775]: I0620 19:43:08.603460 2775 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.603868 kubelet[2775]: I0620 19:43:08.603600 2775 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.709833 kubelet[2775]: I0620 19:43:08.709589 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c499a0e4dc72ddd30e16bd682f95ed0b-ca-certs\") pod \"kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"c499a0e4dc72ddd30e16bd682f95ed0b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.710721 kubelet[2775]: I0620 19:43:08.710360 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-ca-certs\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.711382 kubelet[2775]: I0620 19:43:08.711254 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.711953 kubelet[2775]: I0620 19:43:08.711718 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d0a4eb2989671422410a02ed8a0bade-kubeconfig\") pod \"kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"9d0a4eb2989671422410a02ed8a0bade\") " pod="kube-system/kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:43:08.712440 kubelet[2775]: I0620 19:43:08.712268 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c499a0e4dc72ddd30e16bd682f95ed0b-k8s-certs\") pod \"kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"c499a0e4dc72ddd30e16bd682f95ed0b\") " 
pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal"
Jun 20 19:43:08.712850 kubelet[2775]: I0620 19:43:08.712779 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c499a0e4dc72ddd30e16bd682f95ed0b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"c499a0e4dc72ddd30e16bd682f95ed0b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal"
Jun 20 19:43:08.713290 kubelet[2775]: I0620 19:43:08.713144 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal"
Jun 20 19:43:08.713696 kubelet[2775]: I0620 19:43:08.713470 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal"
Jun 20 19:43:08.713696 kubelet[2775]: I0620 19:43:08.713543 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e5a5a43475fdd6885709ff298547765-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal\" (UID: \"5e5a5a43475fdd6885709ff298547765\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal"
Jun 20 19:43:09.376289 kubelet[2775]: I0620 19:43:09.376247 2775 apiserver.go:52] "Watching apiserver"
Jun 20 19:43:09.408445 kubelet[2775]: I0620 19:43:09.408390 2775 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:43:09.446395 kubelet[2775]: I0620 19:43:09.446364 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal"
Jun 20 19:43:09.447722 kubelet[2775]: I0620 19:43:09.447651 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal"
Jun 20 19:43:09.457638 kubelet[2775]: W0620 19:43:09.457400 2775 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 20 19:43:09.457638 kubelet[2775]: E0620 19:43:09.457509 2775 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal"
Jun 20 19:43:09.458741 kubelet[2775]: W0620 19:43:09.458724 2775 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 20 19:43:09.458963 kubelet[2775]: E0620 19:43:09.458805 2775 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal"
Jun 20 19:43:09.480006 kubelet[2775]: I0620 19:43:09.479740 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344-1-0-8-afb8bdccbb.novalocal" podStartSLOduration=1.4797182119999999 podStartE2EDuration="1.479718212s" podCreationTimestamp="2025-06-20 19:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:43:09.479442954 +0000 UTC m=+1.195068523" watchObservedRunningTime="2025-06-20 19:43:09.479718212 +0000 UTC m=+1.195343780"
Jun 20 19:43:09.508758 kubelet[2775]: I0620 19:43:09.508238 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344-1-0-8-afb8bdccbb.novalocal" podStartSLOduration=1.508216341 podStartE2EDuration="1.508216341s" podCreationTimestamp="2025-06-20 19:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:43:09.507954109 +0000 UTC m=+1.223579707" watchObservedRunningTime="2025-06-20 19:43:09.508216341 +0000 UTC m=+1.223841909"
Jun 20 19:43:09.508758 kubelet[2775]: I0620 19:43:09.508335 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344-1-0-8-afb8bdccbb.novalocal" podStartSLOduration=1.508326592 podStartE2EDuration="1.508326592s" podCreationTimestamp="2025-06-20 19:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:43:09.495860724 +0000 UTC m=+1.211486292" watchObservedRunningTime="2025-06-20 19:43:09.508326592 +0000 UTC m=+1.223952170"
Jun 20 19:43:13.612688 kubelet[2775]: I0620 19:43:13.611744 2775 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 20 19:43:13.613609 containerd[1512]: time="2025-06-20T19:43:13.612549238Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
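The CIDR="192.168.0.0/24" pushed to the runtime above is a plain IPv4 network; a minimal sketch with Python's standard `ipaddress` module shows what membership in that Pod CIDR means (the sample addresses are hypothetical, not taken from this log):

```python
import ipaddress

# Pod CIDR reported by the kubelet in the entries above.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")

print(pod_cidr.num_addresses)                           # a /24 holds 256 addresses
print(ipaddress.ip_address("192.168.0.17") in pod_cidr)  # inside the Pod CIDR
print(ipaddress.ip_address("192.168.1.17") in pod_cidr)  # outside the Pod CIDR
```

Any pod IP the CNI plugin later assigns on this node should fall inside that range.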
Jun 20 19:43:13.614989 kubelet[2775]: I0620 19:43:13.614910 2775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 20 19:43:14.281832 systemd[1]: Created slice kubepods-besteffort-pod857cb3e2_a585_413a_965e_70d035319681.slice - libcontainer container kubepods-besteffort-pod857cb3e2_a585_413a_965e_70d035319681.slice.
Jun 20 19:43:14.353352 kubelet[2775]: I0620 19:43:14.353284 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/857cb3e2-a585-413a-965e-70d035319681-kube-proxy\") pod \"kube-proxy-cnzmj\" (UID: \"857cb3e2-a585-413a-965e-70d035319681\") " pod="kube-system/kube-proxy-cnzmj"
Jun 20 19:43:14.353352 kubelet[2775]: I0620 19:43:14.353336 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/857cb3e2-a585-413a-965e-70d035319681-xtables-lock\") pod \"kube-proxy-cnzmj\" (UID: \"857cb3e2-a585-413a-965e-70d035319681\") " pod="kube-system/kube-proxy-cnzmj"
Jun 20 19:43:14.353352 kubelet[2775]: I0620 19:43:14.353359 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/857cb3e2-a585-413a-965e-70d035319681-lib-modules\") pod \"kube-proxy-cnzmj\" (UID: \"857cb3e2-a585-413a-965e-70d035319681\") " pod="kube-system/kube-proxy-cnzmj"
Jun 20 19:43:14.355042 kubelet[2775]: I0620 19:43:14.353382 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlsq\" (UniqueName: \"kubernetes.io/projected/857cb3e2-a585-413a-965e-70d035319681-kube-api-access-mqlsq\") pod \"kube-proxy-cnzmj\" (UID: \"857cb3e2-a585-413a-965e-70d035319681\") " pod="kube-system/kube-proxy-cnzmj"
Jun 20 19:43:14.594366 containerd[1512]: time="2025-06-20T19:43:14.594127227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cnzmj,Uid:857cb3e2-a585-413a-965e-70d035319681,Namespace:kube-system,Attempt:0,}"
Jun 20 19:43:14.655621 kubelet[2775]: I0620 19:43:14.655392 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1bc34cad-22bf-45eb-908d-84ab8cbb73e8-var-lib-calico\") pod \"tigera-operator-68f7c7984d-l9d45\" (UID: \"1bc34cad-22bf-45eb-908d-84ab8cbb73e8\") " pod="tigera-operator/tigera-operator-68f7c7984d-l9d45"
Jun 20 19:43:14.655621 kubelet[2775]: I0620 19:43:14.655439 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bh5\" (UniqueName: \"kubernetes.io/projected/1bc34cad-22bf-45eb-908d-84ab8cbb73e8-kube-api-access-k5bh5\") pod \"tigera-operator-68f7c7984d-l9d45\" (UID: \"1bc34cad-22bf-45eb-908d-84ab8cbb73e8\") " pod="tigera-operator/tigera-operator-68f7c7984d-l9d45"
Jun 20 19:43:14.663888 systemd[1]: Created slice kubepods-besteffort-pod1bc34cad_22bf_45eb_908d_84ab8cbb73e8.slice - libcontainer container kubepods-besteffort-pod1bc34cad_22bf_45eb_908d_84ab8cbb73e8.slice.
Jun 20 19:43:14.669283 containerd[1512]: time="2025-06-20T19:43:14.669219655Z" level=info msg="connecting to shim 6e7c908b3dfd2ff399cf5a29b044b6282113f6e332229ba1ea26ad7e7691c7fd" address="unix:///run/containerd/s/ac1794087855c009962f9c8c641c7a07c89627a14a0df4cf532695e0827f2088" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:43:14.707408 systemd[1]: Started cri-containerd-6e7c908b3dfd2ff399cf5a29b044b6282113f6e332229ba1ea26ad7e7691c7fd.scope - libcontainer container 6e7c908b3dfd2ff399cf5a29b044b6282113f6e332229ba1ea26ad7e7691c7fd.
Jun 20 19:43:14.779409 containerd[1512]: time="2025-06-20T19:43:14.779352634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cnzmj,Uid:857cb3e2-a585-413a-965e-70d035319681,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e7c908b3dfd2ff399cf5a29b044b6282113f6e332229ba1ea26ad7e7691c7fd\""
Jun 20 19:43:14.785762 containerd[1512]: time="2025-06-20T19:43:14.785626705Z" level=info msg="CreateContainer within sandbox \"6e7c908b3dfd2ff399cf5a29b044b6282113f6e332229ba1ea26ad7e7691c7fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 20 19:43:14.805019 containerd[1512]: time="2025-06-20T19:43:14.804974761Z" level=info msg="Container 081bcc6630ff36618ecf3041c5a63ea0f91d2e2d73e24d4bd7c9a28047e8a126: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:43:14.821916 containerd[1512]: time="2025-06-20T19:43:14.821824874Z" level=info msg="CreateContainer within sandbox \"6e7c908b3dfd2ff399cf5a29b044b6282113f6e332229ba1ea26ad7e7691c7fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"081bcc6630ff36618ecf3041c5a63ea0f91d2e2d73e24d4bd7c9a28047e8a126\""
Jun 20 19:43:14.823786 containerd[1512]: time="2025-06-20T19:43:14.823727161Z" level=info msg="StartContainer for \"081bcc6630ff36618ecf3041c5a63ea0f91d2e2d73e24d4bd7c9a28047e8a126\""
Jun 20 19:43:14.832459 containerd[1512]: time="2025-06-20T19:43:14.832245469Z" level=info msg="connecting to shim 081bcc6630ff36618ecf3041c5a63ea0f91d2e2d73e24d4bd7c9a28047e8a126" address="unix:///run/containerd/s/ac1794087855c009962f9c8c641c7a07c89627a14a0df4cf532695e0827f2088" protocol=ttrpc version=3
Jun 20 19:43:14.855358 systemd[1]: Started cri-containerd-081bcc6630ff36618ecf3041c5a63ea0f91d2e2d73e24d4bd7c9a28047e8a126.scope - libcontainer container 081bcc6630ff36618ecf3041c5a63ea0f91d2e2d73e24d4bd7c9a28047e8a126.
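The containerd entries above all share one structured shape: `time="…" level=… msg="…"`, with inner quotes backslash-escaped. A small sketch of a parser for that shape (the regex and sample line are mine, abbreviated from the log; this is not containerd's own tooling):

```python
import re

# Pull the structured fields out of a containerd-style log entry.
# The msg value may contain backslash-escaped quotes, e.g. \"6e7c908b...\".
LINE = re.compile(
    r'time="(?P<time>[^"]+)"\s+level=(?P<level>\w+)\s+msg="(?P<msg>(?:[^"\\]|\\.)*)"'
)

sample = r'time="2025-06-20T19:43:14.779352634Z" level=info msg="RunPodSandbox returns sandbox id \"6e7c908b\""'
m = LINE.search(sample)
print(m.group("time"), m.group("level"))
print(m.group("msg"))
```

The escape-aware `(?:[^"\\]|\\.)*` group is what keeps the match from stopping at the first `\"` inside the message.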
Jun 20 19:43:14.921798 containerd[1512]: time="2025-06-20T19:43:14.921732399Z" level=info msg="StartContainer for \"081bcc6630ff36618ecf3041c5a63ea0f91d2e2d73e24d4bd7c9a28047e8a126\" returns successfully"
Jun 20 19:43:14.970770 containerd[1512]: time="2025-06-20T19:43:14.970643444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-l9d45,Uid:1bc34cad-22bf-45eb-908d-84ab8cbb73e8,Namespace:tigera-operator,Attempt:0,}"
Jun 20 19:43:15.027786 containerd[1512]: time="2025-06-20T19:43:15.027672756Z" level=info msg="connecting to shim b91e1de483cc0839b75bf7c096c1e058e72320ddafe00340716c2ae3934b3d72" address="unix:///run/containerd/s/c152262ec405b375ed2025eb9a1db2e766d36a6cc6f3bc339fef67ec4f6b8c25" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:43:15.076372 systemd[1]: Started cri-containerd-b91e1de483cc0839b75bf7c096c1e058e72320ddafe00340716c2ae3934b3d72.scope - libcontainer container b91e1de483cc0839b75bf7c096c1e058e72320ddafe00340716c2ae3934b3d72.
Jun 20 19:43:15.172642 containerd[1512]: time="2025-06-20T19:43:15.172472325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-l9d45,Uid:1bc34cad-22bf-45eb-908d-84ab8cbb73e8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b91e1de483cc0839b75bf7c096c1e058e72320ddafe00340716c2ae3934b3d72\""
Jun 20 19:43:15.177410 containerd[1512]: time="2025-06-20T19:43:15.177348405Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\""
Jun 20 19:43:15.497985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547890125.mount: Deactivated successfully.
Jun 20 19:43:15.509726 kubelet[2775]: I0620 19:43:15.509252 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cnzmj" podStartSLOduration=1.5088049749999999 podStartE2EDuration="1.508804975s" podCreationTimestamp="2025-06-20 19:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:43:15.505917614 +0000 UTC m=+7.221543182" watchObservedRunningTime="2025-06-20 19:43:15.508804975 +0000 UTC m=+7.224430553"
Jun 20 19:43:16.561961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2356018284.mount: Deactivated successfully.
Jun 20 19:43:17.753593 containerd[1512]: time="2025-06-20T19:43:17.753520383Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:43:17.755412 containerd[1512]: time="2025-06-20T19:43:17.755337010Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858"
Jun 20 19:43:17.759207 containerd[1512]: time="2025-06-20T19:43:17.758358301Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:43:17.761761 containerd[1512]: time="2025-06-20T19:43:17.761714058Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:43:17.762881 containerd[1512]: time="2025-06-20T19:43:17.762826355Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 2.585292427s"
Jun 20 19:43:17.762881 containerd[1512]: time="2025-06-20T19:43:17.762867704Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\""
Jun 20 19:43:17.767150 containerd[1512]: time="2025-06-20T19:43:17.767113475Z" level=info msg="CreateContainer within sandbox \"b91e1de483cc0839b75bf7c096c1e058e72320ddafe00340716c2ae3934b3d72\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jun 20 19:43:17.785001 containerd[1512]: time="2025-06-20T19:43:17.784958790Z" level=info msg="Container 494f38da087b6e6dbc5bd5d91f98e1b487ca96fcc5f103dcd36eeed68eaad734: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:43:17.790856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032968149.mount: Deactivated successfully.
Jun 20 19:43:17.803840 containerd[1512]: time="2025-06-20T19:43:17.803787137Z" level=info msg="CreateContainer within sandbox \"b91e1de483cc0839b75bf7c096c1e058e72320ddafe00340716c2ae3934b3d72\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"494f38da087b6e6dbc5bd5d91f98e1b487ca96fcc5f103dcd36eeed68eaad734\""
Jun 20 19:43:17.804776 containerd[1512]: time="2025-06-20T19:43:17.804517677Z" level=info msg="StartContainer for \"494f38da087b6e6dbc5bd5d91f98e1b487ca96fcc5f103dcd36eeed68eaad734\""
Jun 20 19:43:17.806241 containerd[1512]: time="2025-06-20T19:43:17.806169050Z" level=info msg="connecting to shim 494f38da087b6e6dbc5bd5d91f98e1b487ca96fcc5f103dcd36eeed68eaad734" address="unix:///run/containerd/s/c152262ec405b375ed2025eb9a1db2e766d36a6cc6f3bc339fef67ec4f6b8c25" protocol=ttrpc version=3
Jun 20 19:43:17.849375 systemd[1]: Started cri-containerd-494f38da087b6e6dbc5bd5d91f98e1b487ca96fcc5f103dcd36eeed68eaad734.scope - libcontainer container 494f38da087b6e6dbc5bd5d91f98e1b487ca96fcc5f103dcd36eeed68eaad734.
Jun 20 19:43:17.912196 containerd[1512]: time="2025-06-20T19:43:17.912114950Z" level=info msg="StartContainer for \"494f38da087b6e6dbc5bd5d91f98e1b487ca96fcc5f103dcd36eeed68eaad734\" returns successfully"
Jun 20 19:43:19.639132 kubelet[2775]: I0620 19:43:19.638674 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-l9d45" podStartSLOduration=3.049352674 podStartE2EDuration="5.638636772s" podCreationTimestamp="2025-06-20 19:43:14 +0000 UTC" firstStartedPulling="2025-06-20 19:43:15.175444828 +0000 UTC m=+6.891070396" lastFinishedPulling="2025-06-20 19:43:17.764728926 +0000 UTC m=+9.480354494" observedRunningTime="2025-06-20 19:43:18.566753618 +0000 UTC m=+10.282379217" watchObservedRunningTime="2025-06-20 19:43:19.638636772 +0000 UTC m=+11.354262340"
Jun 20 19:43:25.129718 sudo[1803]: pam_unix(sudo:session): session closed for user root
Jun 20 19:43:25.383577 sshd[1802]: Connection closed by 172.24.4.1 port 33536
Jun 20 19:43:25.384760 sshd-session[1800]: pam_unix(sshd:session): session closed for user core
Jun 20 19:43:25.393156 systemd[1]: sshd@6-172.24.4.229:22-172.24.4.1:33536.service: Deactivated successfully.
Jun 20 19:43:25.402531 systemd[1]: session-9.scope: Deactivated successfully.
Jun 20 19:43:25.402931 systemd[1]: session-9.scope: Consumed 7.775s CPU time, 228.3M memory peak.
Jun 20 19:43:25.408244 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit.
Jun 20 19:43:25.412567 systemd-logind[1497]: Removed session 9.
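The tigera-operator startup entry above reports podStartSLOduration=3.049352674 against podStartE2EDuration="5.638636772s"; the numbers are consistent with SLO duration being the end-to-end startup time minus the time spent pulling the image. A standalone arithmetic check using only the timestamps from that log entry (this is a sketch, not kubelet code; Python's `%f` parsing keeps microseconds, so the result matches the log to sub-millisecond precision only):

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # Timestamps as printed by the kubelet, e.g. "2025-06-20 19:43:15.175444828 +0000 UTC".
    # fromisoformat accepts at most 6 fractional digits, so trim nanoseconds.
    date, clock, *_ = ts.split()
    if "." in clock:
        hms, frac = clock.split(".")
        clock = hms + "." + frac[:6]
    return datetime.fromisoformat(f"{date} {clock}")

created  = parse("2025-06-20 19:43:14 +0000 UTC")            # podCreationTimestamp
observed = parse("2025-06-20 19:43:19.638636772 +0000 UTC")  # watchObservedRunningTime
pull_a   = parse("2025-06-20 19:43:15.175444828 +0000 UTC")  # firstStartedPulling
pull_b   = parse("2025-06-20 19:43:17.764728926 +0000 UTC")  # lastFinishedPulling

e2e = (observed - created).total_seconds()      # ~5.638637s, matches podStartE2EDuration
slo = e2e - (pull_b - pull_a).total_seconds()   # ~3.049353s, matches podStartSLOduration
print(e2e, slo)
```

For the static pods earlier in the log the pulling timestamps are the zero value ("0001-01-01 …"), which is why their SLO and E2E durations are identical.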
Jun 20 19:43:29.947257 kubelet[2775]: W0620 19:43:29.946634 2775 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4344-1-0-8-afb8bdccbb.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object
Jun 20 19:43:29.951692 kubelet[2775]: I0620 19:43:29.948804 2775 status_manager.go:890] "Failed to get status for pod" podUID="959764a4-957a-4038-a62e-d2d76b1364a1" pod="calico-system/calico-typha-54689bbd4-4lpq8" err="pods \"calico-typha-54689bbd4-4lpq8\" is forbidden: User \"system:node:ci-4344-1-0-8-afb8bdccbb.novalocal\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object"
Jun 20 19:43:29.951692 kubelet[2775]: E0620 19:43:29.949033 2775 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4344-1-0-8-afb8bdccbb.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object" logger="UnhandledError"
Jun 20 19:43:29.951692 kubelet[2775]: W0620 19:43:29.947467 2775 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4344-1-0-8-afb8bdccbb.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object
Jun 20 19:43:29.951692 kubelet[2775]: E0620 19:43:29.949118 2775 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4344-1-0-8-afb8bdccbb.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object" logger="UnhandledError"
Jun 20 19:43:29.953924 systemd[1]: Created slice kubepods-besteffort-pod959764a4_957a_4038_a62e_d2d76b1364a1.slice - libcontainer container kubepods-besteffort-pod959764a4_957a_4038_a62e_d2d76b1364a1.slice.
Jun 20 19:43:29.966289 kubelet[2775]: I0620 19:43:29.966230 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/959764a4-957a-4038-a62e-d2d76b1364a1-typha-certs\") pod \"calico-typha-54689bbd4-4lpq8\" (UID: \"959764a4-957a-4038-a62e-d2d76b1364a1\") " pod="calico-system/calico-typha-54689bbd4-4lpq8"
Jun 20 19:43:29.966503 kubelet[2775]: I0620 19:43:29.966328 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns4sl\" (UniqueName: \"kubernetes.io/projected/959764a4-957a-4038-a62e-d2d76b1364a1-kube-api-access-ns4sl\") pod \"calico-typha-54689bbd4-4lpq8\" (UID: \"959764a4-957a-4038-a62e-d2d76b1364a1\") " pod="calico-system/calico-typha-54689bbd4-4lpq8"
Jun 20 19:43:29.966503 kubelet[2775]: I0620 19:43:29.966363 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/959764a4-957a-4038-a62e-d2d76b1364a1-tigera-ca-bundle\") pod \"calico-typha-54689bbd4-4lpq8\" (UID: \"959764a4-957a-4038-a62e-d2d76b1364a1\") " pod="calico-system/calico-typha-54689bbd4-4lpq8"
Jun 20 19:43:30.263591 kubelet[2775]: W0620 19:43:30.263493 2775 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4344-1-0-8-afb8bdccbb.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object
Jun 20 19:43:30.263591 kubelet[2775]: E0620 19:43:30.263539 2775 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ci-4344-1-0-8-afb8bdccbb.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object" logger="UnhandledError"
Jun 20 19:43:30.263591 kubelet[2775]: W0620 19:43:30.263596 2775 reflector.go:569] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4344-1-0-8-afb8bdccbb.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object
Jun 20 19:43:30.263840 kubelet[2775]: E0620 19:43:30.263618 2775 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"cni-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:ci-4344-1-0-8-afb8bdccbb.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object" logger="UnhandledError"
Jun 20 19:43:30.270827 systemd[1]: Created slice kubepods-besteffort-pod34405592_567f_4db0_b081_a1a1696b60db.slice - libcontainer container kubepods-besteffort-pod34405592_567f_4db0_b081_a1a1696b60db.slice.
Jun 20 19:43:30.369223 kubelet[2775]: I0620 19:43:30.369161 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/34405592-567f-4db0-b081-a1a1696b60db-flexvol-driver-host\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369223 kubelet[2775]: I0620 19:43:30.369225 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnpp8\" (UniqueName: \"kubernetes.io/projected/34405592-567f-4db0-b081-a1a1696b60db-kube-api-access-dnpp8\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369439 kubelet[2775]: I0620 19:43:30.369250 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/34405592-567f-4db0-b081-a1a1696b60db-node-certs\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369439 kubelet[2775]: I0620 19:43:30.369270 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34405592-567f-4db0-b081-a1a1696b60db-xtables-lock\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369439 kubelet[2775]: I0620 19:43:30.369289 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/34405592-567f-4db0-b081-a1a1696b60db-cni-bin-dir\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369439 kubelet[2775]: I0620 19:43:30.369324 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/34405592-567f-4db0-b081-a1a1696b60db-var-run-calico\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369439 kubelet[2775]: I0620 19:43:30.369344 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/34405592-567f-4db0-b081-a1a1696b60db-cni-log-dir\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369599 kubelet[2775]: I0620 19:43:30.369382 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/34405592-567f-4db0-b081-a1a1696b60db-policysync\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369599 kubelet[2775]: I0620 19:43:30.369403 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34405592-567f-4db0-b081-a1a1696b60db-tigera-ca-bundle\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369599 kubelet[2775]: I0620 19:43:30.369421 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/34405592-567f-4db0-b081-a1a1696b60db-cni-net-dir\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369599 kubelet[2775]: I0620 19:43:30.369481 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34405592-567f-4db0-b081-a1a1696b60db-lib-modules\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.369599 kubelet[2775]: I0620 19:43:30.369500 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34405592-567f-4db0-b081-a1a1696b60db-var-lib-calico\") pod \"calico-node-hq9mj\" (UID: \"34405592-567f-4db0-b081-a1a1696b60db\") " pod="calico-system/calico-node-hq9mj"
Jun 20 19:43:30.509112 kubelet[2775]: E0620 19:43:30.508954 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.509112 kubelet[2775]: W0620 19:43:30.508988 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.509112 kubelet[2775]: E0620 19:43:30.509063 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:43:30.514627 kubelet[2775]: E0620 19:43:30.514352 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd"
Jun 20 19:43:30.551235 kubelet[2775]: E0620 19:43:30.550515 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.551700 kubelet[2775]: W0620 19:43:30.551488 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.551700 kubelet[2775]: E0620 19:43:30.551539 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.552035 kubelet[2775]: E0620 19:43:30.552015 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.552389 kubelet[2775]: W0620 19:43:30.552155 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.552389 kubelet[2775]: E0620 19:43:30.552220 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.553084 kubelet[2775]: E0620 19:43:30.552662 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.553588 kubelet[2775]: W0620 19:43:30.553238 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.553588 kubelet[2775]: E0620 19:43:30.553271 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.553873 kubelet[2775]: E0620 19:43:30.553848 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.554681 kubelet[2775]: W0620 19:43:30.554656 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.554828 kubelet[2775]: E0620 19:43:30.554796 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.555422 kubelet[2775]: E0620 19:43:30.555256 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.555422 kubelet[2775]: W0620 19:43:30.555280 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.555422 kubelet[2775]: E0620 19:43:30.555305 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.556254 kubelet[2775]: E0620 19:43:30.556129 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.556254 kubelet[2775]: W0620 19:43:30.556155 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.557593 kubelet[2775]: E0620 19:43:30.557373 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.557946 kubelet[2775]: E0620 19:43:30.557770 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.557946 kubelet[2775]: W0620 19:43:30.557797 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.557946 kubelet[2775]: E0620 19:43:30.557811 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.558955 kubelet[2775]: E0620 19:43:30.558789 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.558955 kubelet[2775]: W0620 19:43:30.558813 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.558955 kubelet[2775]: E0620 19:43:30.558830 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.559389 kubelet[2775]: E0620 19:43:30.559312 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.560298 kubelet[2775]: W0620 19:43:30.559504 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.560298 kubelet[2775]: E0620 19:43:30.559529 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.560550 kubelet[2775]: E0620 19:43:30.560523 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.560840 kubelet[2775]: W0620 19:43:30.560671 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.560840 kubelet[2775]: E0620 19:43:30.560704 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.561092 kubelet[2775]: E0620 19:43:30.561067 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.561263 kubelet[2775]: W0620 19:43:30.561237 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.562314 kubelet[2775]: E0620 19:43:30.562291 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.562777 kubelet[2775]: E0620 19:43:30.562691 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.562777 kubelet[2775]: W0620 19:43:30.562717 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.562777 kubelet[2775]: E0620 19:43:30.562736 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.563602 kubelet[2775]: E0620 19:43:30.563436 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.563602 kubelet[2775]: W0620 19:43:30.563456 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.563602 kubelet[2775]: E0620 19:43:30.563479 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:30.565225 kubelet[2775]: E0620 19:43:30.563910 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.565529 kubelet[2775]: W0620 19:43:30.565358 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.565529 kubelet[2775]: E0620 19:43:30.565392 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.565944 kubelet[2775]: E0620 19:43:30.565786 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.565944 kubelet[2775]: W0620 19:43:30.565810 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.565944 kubelet[2775]: E0620 19:43:30.565827 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.566719 kubelet[2775]: E0620 19:43:30.566650 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.566719 kubelet[2775]: W0620 19:43:30.566672 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.566719 kubelet[2775]: E0620 19:43:30.566688 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.567406 kubelet[2775]: E0620 19:43:30.567346 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.567406 kubelet[2775]: W0620 19:43:30.567358 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.567406 kubelet[2775]: E0620 19:43:30.567369 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.567715 kubelet[2775]: E0620 19:43:30.567702 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.567912 kubelet[2775]: W0620 19:43:30.567800 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.567912 kubelet[2775]: E0620 19:43:30.567815 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.568811 kubelet[2775]: E0620 19:43:30.568749 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.568984 kubelet[2775]: W0620 19:43:30.568908 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.568984 kubelet[2775]: E0620 19:43:30.568926 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.569299 kubelet[2775]: E0620 19:43:30.569286 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.569543 kubelet[2775]: W0620 19:43:30.569469 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.569543 kubelet[2775]: E0620 19:43:30.569488 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.571768 kubelet[2775]: E0620 19:43:30.571741 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.571768 kubelet[2775]: W0620 19:43:30.571765 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.572064 kubelet[2775]: E0620 19:43:30.571786 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.572064 kubelet[2775]: I0620 19:43:30.571817 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd-kubelet-dir\") pod \"csi-node-driver-vb48f\" (UID: \"39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd\") " pod="calico-system/csi-node-driver-vb48f" Jun 20 19:43:30.573096 kubelet[2775]: E0620 19:43:30.572242 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.573096 kubelet[2775]: W0620 19:43:30.572259 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.573096 kubelet[2775]: E0620 19:43:30.572271 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.573096 kubelet[2775]: I0620 19:43:30.572294 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd-registration-dir\") pod \"csi-node-driver-vb48f\" (UID: \"39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd\") " pod="calico-system/csi-node-driver-vb48f" Jun 20 19:43:30.573096 kubelet[2775]: E0620 19:43:30.572843 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.573096 kubelet[2775]: W0620 19:43:30.572855 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.573096 kubelet[2775]: E0620 19:43:30.572866 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.573096 kubelet[2775]: I0620 19:43:30.572882 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd-socket-dir\") pod \"csi-node-driver-vb48f\" (UID: \"39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd\") " pod="calico-system/csi-node-driver-vb48f" Jun 20 19:43:30.573096 kubelet[2775]: E0620 19:43:30.573018 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.573693 kubelet[2775]: W0620 19:43:30.573028 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.573693 kubelet[2775]: E0620 19:43:30.573037 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.573693 kubelet[2775]: I0620 19:43:30.573055 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8njm7\" (UniqueName: \"kubernetes.io/projected/39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd-kube-api-access-8njm7\") pod \"csi-node-driver-vb48f\" (UID: \"39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd\") " pod="calico-system/csi-node-driver-vb48f" Jun 20 19:43:30.573693 kubelet[2775]: E0620 19:43:30.573359 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.573693 kubelet[2775]: W0620 19:43:30.573371 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.573693 kubelet[2775]: E0620 19:43:30.573390 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.573693 kubelet[2775]: I0620 19:43:30.573409 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd-varrun\") pod \"csi-node-driver-vb48f\" (UID: \"39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd\") " pod="calico-system/csi-node-driver-vb48f" Jun 20 19:43:30.574021 kubelet[2775]: E0620 19:43:30.573737 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.574021 kubelet[2775]: W0620 19:43:30.573749 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.574021 kubelet[2775]: E0620 19:43:30.573765 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.574365 kubelet[2775]: E0620 19:43:30.574344 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.574449 kubelet[2775]: W0620 19:43:30.574433 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.575214 kubelet[2775]: E0620 19:43:30.574542 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.575496 kubelet[2775]: E0620 19:43:30.575481 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.575721 kubelet[2775]: W0620 19:43:30.575569 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.575721 kubelet[2775]: E0620 19:43:30.575592 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.575889 kubelet[2775]: E0620 19:43:30.575876 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.576015 kubelet[2775]: W0620 19:43:30.575981 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.576117 kubelet[2775]: E0620 19:43:30.576103 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.576354 kubelet[2775]: E0620 19:43:30.576333 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.576354 kubelet[2775]: W0620 19:43:30.576349 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.576354 kubelet[2775]: E0620 19:43:30.576368 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.578312 kubelet[2775]: E0620 19:43:30.578290 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.578312 kubelet[2775]: W0620 19:43:30.578310 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.578517 kubelet[2775]: E0620 19:43:30.578322 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.578517 kubelet[2775]: E0620 19:43:30.578496 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.578517 kubelet[2775]: W0620 19:43:30.578508 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.578517 kubelet[2775]: E0620 19:43:30.578517 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.578796 kubelet[2775]: E0620 19:43:30.578673 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.578796 kubelet[2775]: W0620 19:43:30.578684 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.578796 kubelet[2775]: E0620 19:43:30.578693 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.578948 kubelet[2775]: E0620 19:43:30.578824 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.578948 kubelet[2775]: W0620 19:43:30.578834 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.578948 kubelet[2775]: E0620 19:43:30.578842 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.579153 kubelet[2775]: E0620 19:43:30.578968 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.579153 kubelet[2775]: W0620 19:43:30.578978 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.579153 kubelet[2775]: E0620 19:43:30.578986 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.674509 kubelet[2775]: E0620 19:43:30.674414 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.674509 kubelet[2775]: W0620 19:43:30.674445 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.674509 kubelet[2775]: E0620 19:43:30.674473 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.675057 kubelet[2775]: E0620 19:43:30.674989 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.675235 kubelet[2775]: W0620 19:43:30.675001 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.675235 kubelet[2775]: E0620 19:43:30.675190 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.675485 kubelet[2775]: E0620 19:43:30.675453 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.675485 kubelet[2775]: W0620 19:43:30.675482 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.675740 kubelet[2775]: E0620 19:43:30.675514 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.675740 kubelet[2775]: E0620 19:43:30.675721 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.675740 kubelet[2775]: W0620 19:43:30.675733 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.675867 kubelet[2775]: E0620 19:43:30.675744 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.676191 kubelet[2775]: E0620 19:43:30.675907 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.676191 kubelet[2775]: W0620 19:43:30.675922 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.676191 kubelet[2775]: E0620 19:43:30.675932 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.676412 kubelet[2775]: E0620 19:43:30.676397 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.676750 kubelet[2775]: W0620 19:43:30.676733 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.676922 kubelet[2775]: E0620 19:43:30.676863 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.677336 kubelet[2775]: E0620 19:43:30.677243 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.677336 kubelet[2775]: W0620 19:43:30.677257 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.677336 kubelet[2775]: E0620 19:43:30.677301 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.677805 kubelet[2775]: E0620 19:43:30.677638 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.677805 kubelet[2775]: W0620 19:43:30.677652 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.677805 kubelet[2775]: E0620 19:43:30.677689 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.678364 kubelet[2775]: E0620 19:43:30.678349 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.678440 kubelet[2775]: W0620 19:43:30.678427 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.678598 kubelet[2775]: E0620 19:43:30.678519 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.679094 kubelet[2775]: E0620 19:43:30.678715 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.679222 kubelet[2775]: W0620 19:43:30.679199 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.679405 kubelet[2775]: E0620 19:43:30.679374 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.679620 kubelet[2775]: E0620 19:43:30.679607 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.680210 kubelet[2775]: W0620 19:43:30.679698 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.680210 kubelet[2775]: E0620 19:43:30.679742 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.680494 kubelet[2775]: E0620 19:43:30.680431 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.680494 kubelet[2775]: W0620 19:43:30.680444 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.680620 kubelet[2775]: E0620 19:43:30.680501 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.680899 kubelet[2775]: E0620 19:43:30.680838 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.680899 kubelet[2775]: W0620 19:43:30.680852 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.680899 kubelet[2775]: E0620 19:43:30.680889 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.681573 kubelet[2775]: E0620 19:43:30.681559 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.681705 kubelet[2775]: W0620 19:43:30.681651 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.681705 kubelet[2775]: E0620 19:43:30.681685 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:30.682053 kubelet[2775]: E0620 19:43:30.682029 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.682053 kubelet[2775]: W0620 19:43:30.682049 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.682265 kubelet[2775]: E0620 19:43:30.682112 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:30.682389 kubelet[2775]: E0620 19:43:30.682370 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:30.682389 kubelet[2775]: W0620 19:43:30.682384 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:30.682522 kubelet[2775]: E0620 19:43:30.682443 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:43:30.682647 kubelet[2775]: E0620 19:43:30.682621 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:30.682647 kubelet[2775]: W0620 19:43:30.682637 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:30.682647 kubelet[2775]: E0620 19:43:30.682667 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:31.067218 kubelet[2775]: E0620 19:43:31.067127 2775 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Jun 20 19:43:31.067678 kubelet[2775]: E0620 19:43:31.067418 2775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/959764a4-957a-4038-a62e-d2d76b1364a1-tigera-ca-bundle podName:959764a4-957a-4038-a62e-d2d76b1364a1 nodeName:}" failed. No retries permitted until 2025-06-20 19:43:31.567333074 +0000 UTC m=+23.282958692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/959764a4-957a-4038-a62e-d2d76b1364a1-tigera-ca-bundle") pod "calico-typha-54689bbd4-4lpq8" (UID: "959764a4-957a-4038-a62e-d2d76b1364a1") : failed to sync configmap cache: timed out waiting for the condition
Jun 20 19:43:31.472602 kubelet[2775]: E0620 19:43:31.471718 2775 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Jun 20 19:43:31.472602 kubelet[2775]: E0620 19:43:31.471741 2775 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition
Jun 20 19:43:31.472602 kubelet[2775]: E0620 19:43:31.471893 2775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34405592-567f-4db0-b081-a1a1696b60db-tigera-ca-bundle podName:34405592-567f-4db0-b081-a1a1696b60db nodeName:}" failed. No retries permitted until 2025-06-20 19:43:31.971850473 +0000 UTC m=+23.687476091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/34405592-567f-4db0-b081-a1a1696b60db-tigera-ca-bundle") pod "calico-node-hq9mj" (UID: "34405592-567f-4db0-b081-a1a1696b60db") : failed to sync configmap cache: timed out waiting for the condition
Jun 20 19:43:31.472602 kubelet[2775]: E0620 19:43:31.472287 2775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34405592-567f-4db0-b081-a1a1696b60db-node-certs podName:34405592-567f-4db0-b081-a1a1696b60db nodeName:}" failed. No retries permitted until 2025-06-20 19:43:31.972231204 +0000 UTC m=+23.687856832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/34405592-567f-4db0-b081-a1a1696b60db-node-certs") pod "calico-node-hq9mj" (UID: "34405592-567f-4db0-b081-a1a1696b60db") : failed to sync secret cache: timed out waiting for the condition
Jun 20 19:43:31.765451 containerd[1512]: time="2025-06-20T19:43:31.765083970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54689bbd4-4lpq8,Uid:959764a4-957a-4038-a62e-d2d76b1364a1,Namespace:calico-system,Attempt:0,}"
Jun 20 19:43:31.841506 containerd[1512]: time="2025-06-20T19:43:31.841345483Z" level=info msg="connecting to shim 7da221c60eb0698deae7fffd2d78dfac08a8c2dc335594e85b0dab83c9beb69d" address="unix:///run/containerd/s/dd33bee27b04e8cce3122ad53bdde73ce84f197fdf81c5f362bafd5cc05a390a" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:43:31.874384 systemd[1]: Started cri-containerd-7da221c60eb0698deae7fffd2d78dfac08a8c2dc335594e85b0dab83c9beb69d.scope - libcontainer container 7da221c60eb0698deae7fffd2d78dfac08a8c2dc335594e85b0dab83c9beb69d.
Jun 20 19:43:31.944402 containerd[1512]: time="2025-06-20T19:43:31.944353395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54689bbd4-4lpq8,Uid:959764a4-957a-4038-a62e-d2d76b1364a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7da221c60eb0698deae7fffd2d78dfac08a8c2dc335594e85b0dab83c9beb69d\""
Jun 20 19:43:31.947807 containerd[1512]: time="2025-06-20T19:43:31.947601129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\""
Jun 20 19:43:32.077663 containerd[1512]: time="2025-06-20T19:43:32.076971852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hq9mj,Uid:34405592-567f-4db0-b081-a1a1696b60db,Namespace:calico-system,Attempt:0,}"
Jun 20 19:43:32.117278 containerd[1512]: time="2025-06-20T19:43:32.116957710Z" level=info msg="connecting to shim 9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756" address="unix:///run/containerd/s/afd8124b76021cdb0afbdec4f969b6ebc887e0e0d4baa6f12c1faa52891a99fb" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:43:32.168417 systemd[1]: Started cri-containerd-9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756.scope - libcontainer container 9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756.
Jun 20 19:43:32.205528 containerd[1512]: time="2025-06-20T19:43:32.205487296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hq9mj,Uid:34405592-567f-4db0-b081-a1a1696b60db,Namespace:calico-system,Attempt:0,} returns sandbox id \"9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756\""
Jun 20 19:43:32.413375 kubelet[2775]: E0620 19:43:32.412859 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd"
Jun 20 19:43:34.044379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1400622746.mount: Deactivated successfully.
Jun 20 19:43:34.414282 kubelet[2775]: E0620 19:43:34.414132 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd"
Jun 20 19:43:35.484304 containerd[1512]: time="2025-06-20T19:43:35.484236474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:43:35.487248 containerd[1512]: time="2025-06-20T19:43:35.486619807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888"
Jun 20 19:43:35.490529 containerd[1512]: time="2025-06-20T19:43:35.490473303Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:43:35.494698 containerd[1512]: time="2025-06-20T19:43:35.494655011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:43:35.496283 containerd[1512]: time="2025-06-20T19:43:35.495800269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 3.548134167s"
Jun 20 19:43:35.496283 containerd[1512]: time="2025-06-20T19:43:35.495843721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\""
Jun 20 19:43:35.500237 containerd[1512]: time="2025-06-20T19:43:35.500202233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\""
Jun 20 19:43:35.525148 containerd[1512]: time="2025-06-20T19:43:35.525110174Z" level=info msg="CreateContainer within sandbox \"7da221c60eb0698deae7fffd2d78dfac08a8c2dc335594e85b0dab83c9beb69d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jun 20 19:43:35.542155 containerd[1512]: time="2025-06-20T19:43:35.541462826Z" level=info msg="Container 48f845b6db4f3b407e7ab85cb90cf52bab3be6775aa96c78d5191670f9adfc82: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:43:35.547193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928848870.mount: Deactivated successfully.
Jun 20 19:43:35.563505 containerd[1512]: time="2025-06-20T19:43:35.563396946Z" level=info msg="CreateContainer within sandbox \"7da221c60eb0698deae7fffd2d78dfac08a8c2dc335594e85b0dab83c9beb69d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"48f845b6db4f3b407e7ab85cb90cf52bab3be6775aa96c78d5191670f9adfc82\""
Jun 20 19:43:35.564445 containerd[1512]: time="2025-06-20T19:43:35.564413040Z" level=info msg="StartContainer for \"48f845b6db4f3b407e7ab85cb90cf52bab3be6775aa96c78d5191670f9adfc82\""
Jun 20 19:43:35.566143 containerd[1512]: time="2025-06-20T19:43:35.566106536Z" level=info msg="connecting to shim 48f845b6db4f3b407e7ab85cb90cf52bab3be6775aa96c78d5191670f9adfc82" address="unix:///run/containerd/s/dd33bee27b04e8cce3122ad53bdde73ce84f197fdf81c5f362bafd5cc05a390a" protocol=ttrpc version=3
Jun 20 19:43:35.606353 systemd[1]: Started cri-containerd-48f845b6db4f3b407e7ab85cb90cf52bab3be6775aa96c78d5191670f9adfc82.scope - libcontainer container 48f845b6db4f3b407e7ab85cb90cf52bab3be6775aa96c78d5191670f9adfc82.
Jun 20 19:43:35.681482 containerd[1512]: time="2025-06-20T19:43:35.681428706Z" level=info msg="StartContainer for \"48f845b6db4f3b407e7ab85cb90cf52bab3be6775aa96c78d5191670f9adfc82\" returns successfully"
Jun 20 19:43:36.417088 kubelet[2775]: E0620 19:43:36.413956 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd"
Jun 20 19:43:36.634748 kubelet[2775]: E0620 19:43:36.634672 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.634748 kubelet[2775]: W0620 19:43:36.634728 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.635092 kubelet[2775]: E0620 19:43:36.634787 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.637342 kubelet[2775]: E0620 19:43:36.636304 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.637342 kubelet[2775]: W0620 19:43:36.636341 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.637342 kubelet[2775]: E0620 19:43:36.636369 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.638758 kubelet[2775]: E0620 19:43:36.638340 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.638758 kubelet[2775]: W0620 19:43:36.638381 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.638758 kubelet[2775]: E0620 19:43:36.638408 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.641409 kubelet[2775]: E0620 19:43:36.641148 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.641409 kubelet[2775]: W0620 19:43:36.641231 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.641409 kubelet[2775]: E0620 19:43:36.641262 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.642809 kubelet[2775]: E0620 19:43:36.642733 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.642809 kubelet[2775]: W0620 19:43:36.642771 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.642809 kubelet[2775]: E0620 19:43:36.642798 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.644228 kubelet[2775]: E0620 19:43:36.644120 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.644228 kubelet[2775]: W0620 19:43:36.644166 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.646174 kubelet[2775]: E0620 19:43:36.645133 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.648145 kubelet[2775]: E0620 19:43:36.648035 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.649033 kubelet[2775]: W0620 19:43:36.648654 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.649033 kubelet[2775]: E0620 19:43:36.648702 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.650559 kubelet[2775]: E0620 19:43:36.650432 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.650559 kubelet[2775]: W0620 19:43:36.650472 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.650559 kubelet[2775]: E0620 19:43:36.650499 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.652356 kubelet[2775]: E0620 19:43:36.652309 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.652356 kubelet[2775]: W0620 19:43:36.652353 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.652683 kubelet[2775]: E0620 19:43:36.652374 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.653832 kubelet[2775]: E0620 19:43:36.653805 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.654070 kubelet[2775]: W0620 19:43:36.653918 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.654070 kubelet[2775]: E0620 19:43:36.653943 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.654544 kubelet[2775]: E0620 19:43:36.654349 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.654544 kubelet[2775]: W0620 19:43:36.654472 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.654544 kubelet[2775]: E0620 19:43:36.654498 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.655083 kubelet[2775]: E0620 19:43:36.655043 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.655083 kubelet[2775]: W0620 19:43:36.655056 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.655083 kubelet[2775]: E0620 19:43:36.655067 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.655808 kubelet[2775]: E0620 19:43:36.655786 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.655979 kubelet[2775]: W0620 19:43:36.655899 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.655979 kubelet[2775]: E0620 19:43:36.655917 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.656523 kubelet[2775]: E0620 19:43:36.656461 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.656523 kubelet[2775]: W0620 19:43:36.656474 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.656523 kubelet[2775]: E0620 19:43:36.656484 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.657023 kubelet[2775]: E0620 19:43:36.656896 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.657023 kubelet[2775]: W0620 19:43:36.656908 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.657023 kubelet[2775]: E0620 19:43:36.656918 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.665876 kubelet[2775]: E0620 19:43:36.665808 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.665876 kubelet[2775]: W0620 19:43:36.665829 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.665876 kubelet[2775]: E0620 19:43:36.665844 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.666365 kubelet[2775]: E0620 19:43:36.666335 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.666566 kubelet[2775]: W0620 19:43:36.666486 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.666566 kubelet[2775]: E0620 19:43:36.666518 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.666884 kubelet[2775]: E0620 19:43:36.666857 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.667050 kubelet[2775]: W0620 19:43:36.666973 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.667050 kubelet[2775]: E0620 19:43:36.667001 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.667481 kubelet[2775]: E0620 19:43:36.667397 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.668689 kubelet[2775]: W0620 19:43:36.667411 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.668689 kubelet[2775]: E0620 19:43:36.667564 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.668898 kubelet[2775]: E0620 19:43:36.668862 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.669132 kubelet[2775]: W0620 19:43:36.668993 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.669580 kubelet[2775]: E0620 19:43:36.669401 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.670106 kubelet[2775]: E0620 19:43:36.670006 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.670106 kubelet[2775]: W0620 19:43:36.670023 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.670476 kubelet[2775]: E0620 19:43:36.670401 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.670476 kubelet[2775]: W0620 19:43:36.670414 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.670750 kubelet[2775]: E0620 19:43:36.670631 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.670750 kubelet[2775]: E0620 19:43:36.670726 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.671706 kubelet[2775]: E0620 19:43:36.671674 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.671874 kubelet[2775]: W0620 19:43:36.671847 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.672125 kubelet[2775]: E0620 19:43:36.672063 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.672497 kubelet[2775]: E0620 19:43:36.672468 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.672497 kubelet[2775]: W0620 19:43:36.672487 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.672579 kubelet[2775]: E0620 19:43:36.672504 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.672971 kubelet[2775]: E0620 19:43:36.672895 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.672971 kubelet[2775]: W0620 19:43:36.672910 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.672971 kubelet[2775]: E0620 19:43:36.672921 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.673384 kubelet[2775]: E0620 19:43:36.673356 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.673384 kubelet[2775]: W0620 19:43:36.673370 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.673699 kubelet[2775]: E0620 19:43:36.673381 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.673918 kubelet[2775]: E0620 19:43:36.673895 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.673918 kubelet[2775]: W0620 19:43:36.673911 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.673994 kubelet[2775]: E0620 19:43:36.673923 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.674535 kubelet[2775]: E0620 19:43:36.674487 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.674535 kubelet[2775]: W0620 19:43:36.674502 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.674535 kubelet[2775]: E0620 19:43:36.674512 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.674928 kubelet[2775]: E0620 19:43:36.674838 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.674928 kubelet[2775]: W0620 19:43:36.674853 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.674928 kubelet[2775]: E0620 19:43:36.674866 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.675192 kubelet[2775]: E0620 19:43:36.675126 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.675192 kubelet[2775]: W0620 19:43:36.675152 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.675192 kubelet[2775]: E0620 19:43:36.675163 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.675539 kubelet[2775]: E0620 19:43:36.675363 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.675539 kubelet[2775]: W0620 19:43:36.675403 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.675539 kubelet[2775]: E0620 19:43:36.675415 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.675871 kubelet[2775]: E0620 19:43:36.675735 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.675871 kubelet[2775]: W0620 19:43:36.675782 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.675871 kubelet[2775]: E0620 19:43:36.675795 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:36.676548 kubelet[2775]: E0620 19:43:36.676510 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:36.676548 kubelet[2775]: W0620 19:43:36.676527 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:36.676548 kubelet[2775]: E0620 19:43:36.676538 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:36.678711 kubelet[2775]: I0620 19:43:36.678643 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54689bbd4-4lpq8" podStartSLOduration=4.126046032 podStartE2EDuration="7.678588728s" podCreationTimestamp="2025-06-20 19:43:29 +0000 UTC" firstStartedPulling="2025-06-20 19:43:31.946537383 +0000 UTC m=+23.662162951" lastFinishedPulling="2025-06-20 19:43:35.499080069 +0000 UTC m=+27.214705647" observedRunningTime="2025-06-20 19:43:36.670617056 +0000 UTC m=+28.386242634" watchObservedRunningTime="2025-06-20 19:43:36.678588728 +0000 UTC m=+28.394214306"
Jun 20 19:43:37.652632 containerd[1512]: time="2025-06-20T19:43:37.652480937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:43:37.654906 containerd[1512]: time="2025-06-20T19:43:37.654875990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627"
Jun 20 19:43:37.656279 containerd[1512]: time="2025-06-20T19:43:37.656235294Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:43:37.660301 containerd[1512]: time="2025-06-20T19:43:37.660257136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:43:37.661932 containerd[1512]: time="2025-06-20T19:43:37.661879317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 2.161641526s"
Jun 20 19:43:37.662008 containerd[1512]: time="2025-06-20T19:43:37.661934742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\""
Jun 20 19:43:37.664904 kubelet[2775]: E0620 19:43:37.664816 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:37.664904 kubelet[2775]: W0620 19:43:37.664894 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:37.665358 kubelet[2775]: E0620 19:43:37.664914 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:37.665358 kubelet[2775]: E0620 19:43:37.665328 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:37.665358 kubelet[2775]: W0620 19:43:37.665339 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:37.665358 kubelet[2775]: E0620 19:43:37.665350 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:37.665591 kubelet[2775]: E0620 19:43:37.665564 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:37.665591 kubelet[2775]: W0620 19:43:37.665580 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:37.665591 kubelet[2775]: E0620 19:43:37.665590 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:37.665791 kubelet[2775]: E0620 19:43:37.665773 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:37.665791 kubelet[2775]: W0620 19:43:37.665787 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:37.665876 kubelet[2775]: E0620 19:43:37.665798 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:37.666079 kubelet[2775]: E0620 19:43:37.666051 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:37.666126 kubelet[2775]: W0620 19:43:37.666067 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:37.666126 kubelet[2775]: E0620 19:43:37.666099 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:37.666341 kubelet[2775]: E0620 19:43:37.666312 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:37.666341 kubelet[2775]: W0620 19:43:37.666329 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:37.666341 kubelet[2775]: E0620 19:43:37.666339 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:37.668924 kubelet[2775]: E0620 19:43:37.667692 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:37.668924 kubelet[2775]: W0620 19:43:37.667710 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:37.668924 kubelet[2775]: E0620 19:43:37.667722 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jun 20 19:43:37.668924 kubelet[2775]: E0620 19:43:37.667935 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:37.668924 kubelet[2775]: W0620 19:43:37.667946 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:37.668924 kubelet[2775]: E0620 19:43:37.667963 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:43:37.668924 kubelet[2775]: E0620 19:43:37.668167 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:43:37.668924 kubelet[2775]: W0620 19:43:37.668216 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:43:37.668924 kubelet[2775]: E0620 19:43:37.668229 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jun 20 19:43:37.668924 kubelet[2775]: E0620 19:43:37.668384 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.669606 kubelet[2775]: W0620 19:43:37.668403 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.669606 kubelet[2775]: E0620 19:43:37.668413 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.669606 kubelet[2775]: E0620 19:43:37.668570 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.669606 kubelet[2775]: W0620 19:43:37.668580 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.669606 kubelet[2775]: E0620 19:43:37.668589 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.669606 kubelet[2775]: E0620 19:43:37.668761 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.669606 kubelet[2775]: W0620 19:43:37.668771 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.669606 kubelet[2775]: E0620 19:43:37.668779 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.669606 kubelet[2775]: E0620 19:43:37.668963 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.669606 kubelet[2775]: W0620 19:43:37.668976 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.670231 containerd[1512]: time="2025-06-20T19:43:37.669550036Z" level=info msg="CreateContainer within sandbox \"9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:43:37.670329 kubelet[2775]: E0620 19:43:37.668988 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.670329 kubelet[2775]: E0620 19:43:37.669337 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.670329 kubelet[2775]: W0620 19:43:37.669348 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.670548 kubelet[2775]: E0620 19:43:37.669358 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.670640 kubelet[2775]: E0620 19:43:37.670614 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.670640 kubelet[2775]: W0620 19:43:37.670631 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.671068 kubelet[2775]: E0620 19:43:37.670643 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.680268 kubelet[2775]: E0620 19:43:37.680244 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.684817 kubelet[2775]: W0620 19:43:37.680361 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.684817 kubelet[2775]: E0620 19:43:37.680391 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.684817 kubelet[2775]: E0620 19:43:37.680594 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.684817 kubelet[2775]: W0620 19:43:37.680604 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.684817 kubelet[2775]: E0620 19:43:37.680620 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.684817 kubelet[2775]: E0620 19:43:37.680888 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.684817 kubelet[2775]: W0620 19:43:37.680934 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.684817 kubelet[2775]: E0620 19:43:37.680966 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.684817 kubelet[2775]: E0620 19:43:37.681251 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.684817 kubelet[2775]: W0620 19:43:37.681263 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.685113 kubelet[2775]: E0620 19:43:37.681274 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.685113 kubelet[2775]: E0620 19:43:37.681527 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.685113 kubelet[2775]: W0620 19:43:37.681538 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.685113 kubelet[2775]: E0620 19:43:37.681555 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.685113 kubelet[2775]: E0620 19:43:37.681817 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.685113 kubelet[2775]: W0620 19:43:37.681854 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.685113 kubelet[2775]: E0620 19:43:37.681872 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.685113 kubelet[2775]: E0620 19:43:37.682074 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.685113 kubelet[2775]: W0620 19:43:37.682084 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.685113 kubelet[2775]: E0620 19:43:37.682130 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.685709 kubelet[2775]: E0620 19:43:37.682433 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.685709 kubelet[2775]: W0620 19:43:37.682443 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.685709 kubelet[2775]: E0620 19:43:37.682524 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.685709 kubelet[2775]: E0620 19:43:37.683264 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.685709 kubelet[2775]: W0620 19:43:37.683277 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.685709 kubelet[2775]: E0620 19:43:37.683290 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.685709 kubelet[2775]: E0620 19:43:37.683478 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.685709 kubelet[2775]: W0620 19:43:37.683489 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.685709 kubelet[2775]: E0620 19:43:37.683507 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.685709 kubelet[2775]: E0620 19:43:37.683718 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.686022 kubelet[2775]: W0620 19:43:37.683730 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.686022 kubelet[2775]: E0620 19:43:37.683746 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.686022 kubelet[2775]: E0620 19:43:37.684035 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.686022 kubelet[2775]: W0620 19:43:37.684045 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.686022 kubelet[2775]: E0620 19:43:37.684074 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.686022 kubelet[2775]: E0620 19:43:37.684454 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.686022 kubelet[2775]: W0620 19:43:37.684464 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.686022 kubelet[2775]: E0620 19:43:37.684502 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.686022 kubelet[2775]: E0620 19:43:37.684772 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.686022 kubelet[2775]: W0620 19:43:37.684783 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.686413 kubelet[2775]: E0620 19:43:37.685436 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.686413 kubelet[2775]: E0620 19:43:37.685472 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.686413 kubelet[2775]: W0620 19:43:37.685483 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.686413 kubelet[2775]: E0620 19:43:37.685504 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.686413 kubelet[2775]: E0620 19:43:37.685702 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.686413 kubelet[2775]: W0620 19:43:37.685712 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.686413 kubelet[2775]: E0620 19:43:37.685722 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:43:37.686413 kubelet[2775]: E0620 19:43:37.685963 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.686413 kubelet[2775]: W0620 19:43:37.685973 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.686413 kubelet[2775]: E0620 19:43:37.685983 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.686678 kubelet[2775]: E0620 19:43:37.686664 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:43:37.686678 kubelet[2775]: W0620 19:43:37.686675 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:43:37.686737 kubelet[2775]: E0620 19:43:37.686685 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:43:37.691770 containerd[1512]: time="2025-06-20T19:43:37.691710703Z" level=info msg="Container 57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:43:37.702316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839475769.mount: Deactivated successfully. 
Jun 20 19:43:37.714197 containerd[1512]: time="2025-06-20T19:43:37.712438928Z" level=info msg="CreateContainer within sandbox \"9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767\"" Jun 20 19:43:37.716167 containerd[1512]: time="2025-06-20T19:43:37.716041167Z" level=info msg="StartContainer for \"57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767\"" Jun 20 19:43:37.719789 containerd[1512]: time="2025-06-20T19:43:37.719736251Z" level=info msg="connecting to shim 57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767" address="unix:///run/containerd/s/afd8124b76021cdb0afbdec4f969b6ebc887e0e0d4baa6f12c1faa52891a99fb" protocol=ttrpc version=3 Jun 20 19:43:37.757374 systemd[1]: Started cri-containerd-57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767.scope - libcontainer container 57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767. Jun 20 19:43:37.809056 containerd[1512]: time="2025-06-20T19:43:37.809022065Z" level=info msg="StartContainer for \"57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767\" returns successfully" Jun 20 19:43:37.811786 systemd[1]: cri-containerd-57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767.scope: Deactivated successfully. 
Jun 20 19:43:37.817592 containerd[1512]: time="2025-06-20T19:43:37.817542673Z" level=info msg="received exit event container_id:\"57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767\" id:\"57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767\" pid:3507 exited_at:{seconds:1750448617 nanos:814926401}" Jun 20 19:43:37.818510 containerd[1512]: time="2025-06-20T19:43:37.818478915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767\" id:\"57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767\" pid:3507 exited_at:{seconds:1750448617 nanos:814926401}" Jun 20 19:43:37.849150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57ccb4385becbccfc32781aa340bc75250327697e49b2d1c2f41a234af834767-rootfs.mount: Deactivated successfully. Jun 20 19:43:38.415498 kubelet[2775]: E0620 19:43:38.414696 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd" Jun 20 19:43:39.659469 containerd[1512]: time="2025-06-20T19:43:39.655870374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:43:40.412562 kubelet[2775]: E0620 19:43:40.412373 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd" Jun 20 19:43:42.414280 kubelet[2775]: E0620 19:43:42.412839 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd" Jun 20 19:43:44.412001 kubelet[2775]: E0620 19:43:44.411887 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd" Jun 20 19:43:44.971605 containerd[1512]: time="2025-06-20T19:43:44.971543125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:43:44.973257 containerd[1512]: time="2025-06-20T19:43:44.973191693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:43:44.974868 containerd[1512]: time="2025-06-20T19:43:44.974823780Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:43:44.977994 containerd[1512]: time="2025-06-20T19:43:44.977957016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:43:44.978724 containerd[1512]: time="2025-06-20T19:43:44.978689652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 5.317907624s" Jun 20 19:43:44.978724 containerd[1512]: time="2025-06-20T19:43:44.978723386Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:43:44.983738 containerd[1512]: time="2025-06-20T19:43:44.983688777Z" level=info msg="CreateContainer within sandbox \"9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:43:45.003460 containerd[1512]: time="2025-06-20T19:43:45.003341027Z" level=info msg="Container 21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:43:45.021123 containerd[1512]: time="2025-06-20T19:43:45.021068634Z" level=info msg="CreateContainer within sandbox \"9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce\"" Jun 20 19:43:45.022205 containerd[1512]: time="2025-06-20T19:43:45.021825396Z" level=info msg="StartContainer for \"21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce\"" Jun 20 19:43:45.026017 containerd[1512]: time="2025-06-20T19:43:45.025863161Z" level=info msg="connecting to shim 21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce" address="unix:///run/containerd/s/afd8124b76021cdb0afbdec4f969b6ebc887e0e0d4baa6f12c1faa52891a99fb" protocol=ttrpc version=3 Jun 20 19:43:45.070548 systemd[1]: Started cri-containerd-21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce.scope - libcontainer container 21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce. 
Jun 20 19:43:45.133194 containerd[1512]: time="2025-06-20T19:43:45.133095834Z" level=info msg="StartContainer for \"21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce\" returns successfully" Jun 20 19:43:46.415234 kubelet[2775]: E0620 19:43:46.414114 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd" Jun 20 19:43:47.007889 containerd[1512]: time="2025-06-20T19:43:47.007637538Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:43:47.014583 systemd[1]: cri-containerd-21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce.scope: Deactivated successfully. Jun 20 19:43:47.017337 systemd[1]: cri-containerd-21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce.scope: Consumed 1.164s CPU time, 194.2M memory peak, 171.2M written to disk. 
Jun 20 19:43:47.021877 containerd[1512]: time="2025-06-20T19:43:47.021677678Z" level=info msg="received exit event container_id:\"21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce\" id:\"21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce\" pid:3567 exited_at:{seconds:1750448627 nanos:21308810}" Jun 20 19:43:47.021877 containerd[1512]: time="2025-06-20T19:43:47.021845435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce\" id:\"21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce\" pid:3567 exited_at:{seconds:1750448627 nanos:21308810}" Jun 20 19:43:47.039124 kubelet[2775]: I0620 19:43:47.037417 2775 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:43:47.105507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21a84741099d7ff52f730b3c969e4029e2dea6dd7d83663627ac68359f574cce-rootfs.mount: Deactivated successfully. Jun 20 19:43:47.126099 systemd[1]: Created slice kubepods-burstable-pod408b62a3_728a_42d2_86d9_4db693b39e20.slice - libcontainer container kubepods-burstable-pod408b62a3_728a_42d2_86d9_4db693b39e20.slice. Jun 20 19:43:47.137600 systemd[1]: Created slice kubepods-burstable-pode57eaa74_3032_4619_bb92_dee9a0197bd2.slice - libcontainer container kubepods-burstable-pode57eaa74_3032_4619_bb92_dee9a0197bd2.slice. 
Jun 20 19:43:47.469768 kubelet[2775]: I0620 19:43:47.175550 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/408b62a3-728a-42d2-86d9-4db693b39e20-config-volume\") pod \"coredns-668d6bf9bc-mq9jd\" (UID: \"408b62a3-728a-42d2-86d9-4db693b39e20\") " pod="kube-system/coredns-668d6bf9bc-mq9jd" Jun 20 19:43:47.469768 kubelet[2775]: I0620 19:43:47.175643 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e57eaa74-3032-4619-bb92-dee9a0197bd2-config-volume\") pod \"coredns-668d6bf9bc-z95lx\" (UID: \"e57eaa74-3032-4619-bb92-dee9a0197bd2\") " pod="kube-system/coredns-668d6bf9bc-z95lx" Jun 20 19:43:47.469768 kubelet[2775]: I0620 19:43:47.175668 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8qhg\" (UniqueName: \"kubernetes.io/projected/e57eaa74-3032-4619-bb92-dee9a0197bd2-kube-api-access-k8qhg\") pod \"coredns-668d6bf9bc-z95lx\" (UID: \"e57eaa74-3032-4619-bb92-dee9a0197bd2\") " pod="kube-system/coredns-668d6bf9bc-z95lx" Jun 20 19:43:47.469768 kubelet[2775]: I0620 19:43:47.175733 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zttw7\" (UniqueName: \"kubernetes.io/projected/408b62a3-728a-42d2-86d9-4db693b39e20-kube-api-access-zttw7\") pod \"coredns-668d6bf9bc-mq9jd\" (UID: \"408b62a3-728a-42d2-86d9-4db693b39e20\") " pod="kube-system/coredns-668d6bf9bc-mq9jd" Jun 20 19:43:47.550879 systemd[1]: Created slice kubepods-besteffort-pod4d816cc2_af80_4ebc_a21e_c0daa7f3bd4f.slice - libcontainer container kubepods-besteffort-pod4d816cc2_af80_4ebc_a21e_c0daa7f3bd4f.slice. 
Jun 20 19:43:47.599208 kubelet[2775]: I0620 19:43:47.598639 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f-tigera-ca-bundle\") pod \"calico-kube-controllers-5bbd87cbf5-cz57f\" (UID: \"4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f\") " pod="calico-system/calico-kube-controllers-5bbd87cbf5-cz57f" Jun 20 19:43:47.599885 kubelet[2775]: I0620 19:43:47.599736 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-whisker-backend-key-pair\") pod \"whisker-86596f5f9b-psrk5\" (UID: \"30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b\") " pod="calico-system/whisker-86596f5f9b-psrk5" Jun 20 19:43:47.599885 kubelet[2775]: I0620 19:43:47.599817 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-whisker-ca-bundle\") pod \"whisker-86596f5f9b-psrk5\" (UID: \"30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b\") " pod="calico-system/whisker-86596f5f9b-psrk5" Jun 20 19:43:47.600284 kubelet[2775]: I0620 19:43:47.599848 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0-calico-apiserver-certs\") pod \"calico-apiserver-8f5b5557d-6rptn\" (UID: \"47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0\") " pod="calico-apiserver/calico-apiserver-8f5b5557d-6rptn" Jun 20 19:43:47.600450 kubelet[2775]: I0620 19:43:47.600263 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4f4j\" (UniqueName: \"kubernetes.io/projected/4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f-kube-api-access-m4f4j\") pod 
\"calico-kube-controllers-5bbd87cbf5-cz57f\" (UID: \"4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f\") " pod="calico-system/calico-kube-controllers-5bbd87cbf5-cz57f" Jun 20 19:43:47.600450 kubelet[2775]: I0620 19:43:47.600423 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8-calico-apiserver-certs\") pod \"calico-apiserver-8f5b5557d-j7slj\" (UID: \"807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8\") " pod="calico-apiserver/calico-apiserver-8f5b5557d-j7slj" Jun 20 19:43:47.601593 kubelet[2775]: I0620 19:43:47.601038 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jcw7\" (UniqueName: \"kubernetes.io/projected/f88145ed-7b64-4352-a338-c1198720e562-kube-api-access-2jcw7\") pod \"calico-apiserver-7bcf54c67d-bm2gt\" (UID: \"f88145ed-7b64-4352-a338-c1198720e562\") " pod="calico-apiserver/calico-apiserver-7bcf54c67d-bm2gt" Jun 20 19:43:47.601593 kubelet[2775]: I0620 19:43:47.601142 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp5lt\" (UniqueName: \"kubernetes.io/projected/47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0-kube-api-access-hp5lt\") pod \"calico-apiserver-8f5b5557d-6rptn\" (UID: \"47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0\") " pod="calico-apiserver/calico-apiserver-8f5b5557d-6rptn" Jun 20 19:43:47.604576 systemd[1]: Created slice kubepods-besteffort-pod47185c6a_c8ab_4dd4_92fc_fb3871ec6bc0.slice - libcontainer container kubepods-besteffort-pod47185c6a_c8ab_4dd4_92fc_fb3871ec6bc0.slice. 
Jun 20 19:43:47.605373 kubelet[2775]: I0620 19:43:47.605144 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lgwk\" (UniqueName: \"kubernetes.io/projected/807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8-kube-api-access-6lgwk\") pod \"calico-apiserver-8f5b5557d-j7slj\" (UID: \"807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8\") " pod="calico-apiserver/calico-apiserver-8f5b5557d-j7slj" Jun 20 19:43:47.607700 kubelet[2775]: I0620 19:43:47.607672 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7t4\" (UniqueName: \"kubernetes.io/projected/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-kube-api-access-ch7t4\") pod \"whisker-86596f5f9b-psrk5\" (UID: \"30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b\") " pod="calico-system/whisker-86596f5f9b-psrk5" Jun 20 19:43:47.607911 kubelet[2775]: I0620 19:43:47.607891 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f88145ed-7b64-4352-a338-c1198720e562-calico-apiserver-certs\") pod \"calico-apiserver-7bcf54c67d-bm2gt\" (UID: \"f88145ed-7b64-4352-a338-c1198720e562\") " pod="calico-apiserver/calico-apiserver-7bcf54c67d-bm2gt" Jun 20 19:43:47.617434 systemd[1]: Created slice kubepods-besteffort-podf88145ed_7b64_4352_a338_c1198720e562.slice - libcontainer container kubepods-besteffort-podf88145ed_7b64_4352_a338_c1198720e562.slice. Jun 20 19:43:47.626678 systemd[1]: Created slice kubepods-besteffort-pod807e0faf_4cc3_44e6_a4b0_dcafcc5bbcf8.slice - libcontainer container kubepods-besteffort-pod807e0faf_4cc3_44e6_a4b0_dcafcc5bbcf8.slice. Jun 20 19:43:47.633227 systemd[1]: Created slice kubepods-besteffort-podbe112bd7_6812_4441_9f4a_bbae19965424.slice - libcontainer container kubepods-besteffort-podbe112bd7_6812_4441_9f4a_bbae19965424.slice. 
Jun 20 19:43:47.640108 systemd[1]: Created slice kubepods-besteffort-pod30a05ee6_6dba_44f8_92c5_17d3d6bc6e1b.slice - libcontainer container kubepods-besteffort-pod30a05ee6_6dba_44f8_92c5_17d3d6bc6e1b.slice. Jun 20 19:43:47.709851 kubelet[2775]: I0620 19:43:47.709147 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be112bd7-6812-4441-9f4a-bbae19965424-config\") pod \"goldmane-5bd85449d4-nq7xc\" (UID: \"be112bd7-6812-4441-9f4a-bbae19965424\") " pod="calico-system/goldmane-5bd85449d4-nq7xc" Jun 20 19:43:47.709851 kubelet[2775]: I0620 19:43:47.709292 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/be112bd7-6812-4441-9f4a-bbae19965424-goldmane-key-pair\") pod \"goldmane-5bd85449d4-nq7xc\" (UID: \"be112bd7-6812-4441-9f4a-bbae19965424\") " pod="calico-system/goldmane-5bd85449d4-nq7xc" Jun 20 19:43:47.709851 kubelet[2775]: I0620 19:43:47.709480 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be112bd7-6812-4441-9f4a-bbae19965424-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-nq7xc\" (UID: \"be112bd7-6812-4441-9f4a-bbae19965424\") " pod="calico-system/goldmane-5bd85449d4-nq7xc" Jun 20 19:43:47.709851 kubelet[2775]: I0620 19:43:47.709528 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4sh\" (UniqueName: \"kubernetes.io/projected/be112bd7-6812-4441-9f4a-bbae19965424-kube-api-access-kg4sh\") pod \"goldmane-5bd85449d4-nq7xc\" (UID: \"be112bd7-6812-4441-9f4a-bbae19965424\") " pod="calico-system/goldmane-5bd85449d4-nq7xc" Jun 20 19:43:47.776432 containerd[1512]: time="2025-06-20T19:43:47.774583797Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-z95lx,Uid:e57eaa74-3032-4619-bb92-dee9a0197bd2,Namespace:kube-system,Attempt:0,}" Jun 20 19:43:47.776956 containerd[1512]: time="2025-06-20T19:43:47.776915075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mq9jd,Uid:408b62a3-728a-42d2-86d9-4db693b39e20,Namespace:kube-system,Attempt:0,}" Jun 20 19:43:48.190012 containerd[1512]: time="2025-06-20T19:43:48.189420888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bbd87cbf5-cz57f,Uid:4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f,Namespace:calico-system,Attempt:0,}" Jun 20 19:43:48.213529 containerd[1512]: time="2025-06-20T19:43:48.213400680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f5b5557d-6rptn,Uid:47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:43:48.223395 containerd[1512]: time="2025-06-20T19:43:48.223342108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf54c67d-bm2gt,Uid:f88145ed-7b64-4352-a338-c1198720e562,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:43:48.236002 containerd[1512]: time="2025-06-20T19:43:48.232321146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f5b5557d-j7slj,Uid:807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:43:48.237226 containerd[1512]: time="2025-06-20T19:43:48.237156839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-nq7xc,Uid:be112bd7-6812-4441-9f4a-bbae19965424,Namespace:calico-system,Attempt:0,}" Jun 20 19:43:48.247072 containerd[1512]: time="2025-06-20T19:43:48.247018245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86596f5f9b-psrk5,Uid:30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b,Namespace:calico-system,Attempt:0,}" Jun 20 19:43:48.294444 containerd[1512]: time="2025-06-20T19:43:48.294213141Z" level=error msg="Failed to destroy network for sandbox 
\"60a642533cc0be3c39340934602a1bfd0e718fac5c25208c7579adb683083ad1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.295015 containerd[1512]: time="2025-06-20T19:43:48.294980593Z" level=error msg="Failed to destroy network for sandbox \"57fb493e22e8a34a67bb60089134386ca15361fb994cd9a45fe12ab795a8f323\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.297974 containerd[1512]: time="2025-06-20T19:43:48.297889353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mq9jd,Uid:408b62a3-728a-42d2-86d9-4db693b39e20,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a642533cc0be3c39340934602a1bfd0e718fac5c25208c7579adb683083ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.298719 kubelet[2775]: E0620 19:43:48.298634 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a642533cc0be3c39340934602a1bfd0e718fac5c25208c7579adb683083ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.299053 kubelet[2775]: E0620 19:43:48.298995 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a642533cc0be3c39340934602a1bfd0e718fac5c25208c7579adb683083ad1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mq9jd" Jun 20 19:43:48.299323 kubelet[2775]: E0620 19:43:48.299242 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a642533cc0be3c39340934602a1bfd0e718fac5c25208c7579adb683083ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mq9jd" Jun 20 19:43:48.300378 kubelet[2775]: E0620 19:43:48.299974 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mq9jd_kube-system(408b62a3-728a-42d2-86d9-4db693b39e20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mq9jd_kube-system(408b62a3-728a-42d2-86d9-4db693b39e20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60a642533cc0be3c39340934602a1bfd0e718fac5c25208c7579adb683083ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mq9jd" podUID="408b62a3-728a-42d2-86d9-4db693b39e20" Jun 20 19:43:48.303679 containerd[1512]: time="2025-06-20T19:43:48.303610821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z95lx,Uid:e57eaa74-3032-4619-bb92-dee9a0197bd2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57fb493e22e8a34a67bb60089134386ca15361fb994cd9a45fe12ab795a8f323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jun 20 19:43:48.304153 kubelet[2775]: E0620 19:43:48.303845 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57fb493e22e8a34a67bb60089134386ca15361fb994cd9a45fe12ab795a8f323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.304153 kubelet[2775]: E0620 19:43:48.303896 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57fb493e22e8a34a67bb60089134386ca15361fb994cd9a45fe12ab795a8f323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z95lx" Jun 20 19:43:48.304153 kubelet[2775]: E0620 19:43:48.303919 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57fb493e22e8a34a67bb60089134386ca15361fb994cd9a45fe12ab795a8f323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z95lx" Jun 20 19:43:48.304332 kubelet[2775]: E0620 19:43:48.303962 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z95lx_kube-system(e57eaa74-3032-4619-bb92-dee9a0197bd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z95lx_kube-system(e57eaa74-3032-4619-bb92-dee9a0197bd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57fb493e22e8a34a67bb60089134386ca15361fb994cd9a45fe12ab795a8f323\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z95lx" podUID="e57eaa74-3032-4619-bb92-dee9a0197bd2" Jun 20 19:43:48.422796 systemd[1]: Created slice kubepods-besteffort-pod39f7ce5c_cc5a_4e2a_84b2_7eb435c846cd.slice - libcontainer container kubepods-besteffort-pod39f7ce5c_cc5a_4e2a_84b2_7eb435c846cd.slice. Jun 20 19:43:48.433830 containerd[1512]: time="2025-06-20T19:43:48.433784166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vb48f,Uid:39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd,Namespace:calico-system,Attempt:0,}" Jun 20 19:43:48.490262 containerd[1512]: time="2025-06-20T19:43:48.489660578Z" level=error msg="Failed to destroy network for sandbox \"d41f8dcb6073e61f4883776ff672180d38ad4b1c7956ab531fe371f7fe6ab403\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.497419 containerd[1512]: time="2025-06-20T19:43:48.497361209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f5b5557d-j7slj,Uid:807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d41f8dcb6073e61f4883776ff672180d38ad4b1c7956ab531fe371f7fe6ab403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.498091 kubelet[2775]: E0620 19:43:48.497926 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d41f8dcb6073e61f4883776ff672180d38ad4b1c7956ab531fe371f7fe6ab403\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.498726 kubelet[2775]: E0620 19:43:48.498564 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d41f8dcb6073e61f4883776ff672180d38ad4b1c7956ab531fe371f7fe6ab403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f5b5557d-j7slj" Jun 20 19:43:48.498726 kubelet[2775]: E0620 19:43:48.498631 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d41f8dcb6073e61f4883776ff672180d38ad4b1c7956ab531fe371f7fe6ab403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f5b5557d-j7slj" Jun 20 19:43:48.499937 kubelet[2775]: E0620 19:43:48.499705 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f5b5557d-j7slj_calico-apiserver(807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f5b5557d-j7slj_calico-apiserver(807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d41f8dcb6073e61f4883776ff672180d38ad4b1c7956ab531fe371f7fe6ab403\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f5b5557d-j7slj" podUID="807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8" Jun 20 
19:43:48.503105 containerd[1512]: time="2025-06-20T19:43:48.503045456Z" level=error msg="Failed to destroy network for sandbox \"5415eaddc35d758481e22d7063f00b3543184b10c7d1e8b8edb45a8ce1fe5111\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.505128 containerd[1512]: time="2025-06-20T19:43:48.504789564Z" level=error msg="Failed to destroy network for sandbox \"b0302f39c83e42dc24ff96b524ccaf75fae4a49f5d0f4de7ed0849315d53a48b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.506058 containerd[1512]: time="2025-06-20T19:43:48.506001826Z" level=error msg="Failed to destroy network for sandbox \"452aa0971f7805701a664de67b9566bd25ac16737ca114425816fd2cc252abff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.508569 containerd[1512]: time="2025-06-20T19:43:48.506352860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf54c67d-bm2gt,Uid:f88145ed-7b64-4352-a338-c1198720e562,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5415eaddc35d758481e22d7063f00b3543184b10c7d1e8b8edb45a8ce1fe5111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.509449 kubelet[2775]: E0620 19:43:48.508682 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5415eaddc35d758481e22d7063f00b3543184b10c7d1e8b8edb45a8ce1fe5111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.509449 kubelet[2775]: E0620 19:43:48.508749 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5415eaddc35d758481e22d7063f00b3543184b10c7d1e8b8edb45a8ce1fe5111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcf54c67d-bm2gt" Jun 20 19:43:48.509449 kubelet[2775]: E0620 19:43:48.508775 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5415eaddc35d758481e22d7063f00b3543184b10c7d1e8b8edb45a8ce1fe5111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcf54c67d-bm2gt" Jun 20 19:43:48.509671 kubelet[2775]: E0620 19:43:48.508822 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bcf54c67d-bm2gt_calico-apiserver(f88145ed-7b64-4352-a338-c1198720e562)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bcf54c67d-bm2gt_calico-apiserver(f88145ed-7b64-4352-a338-c1198720e562)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5415eaddc35d758481e22d7063f00b3543184b10c7d1e8b8edb45a8ce1fe5111\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7bcf54c67d-bm2gt" podUID="f88145ed-7b64-4352-a338-c1198720e562" Jun 20 19:43:48.514016 containerd[1512]: time="2025-06-20T19:43:48.513892345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f5b5557d-6rptn,Uid:47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0302f39c83e42dc24ff96b524ccaf75fae4a49f5d0f4de7ed0849315d53a48b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.514567 kubelet[2775]: E0620 19:43:48.514522 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0302f39c83e42dc24ff96b524ccaf75fae4a49f5d0f4de7ed0849315d53a48b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.515557 kubelet[2775]: E0620 19:43:48.515302 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0302f39c83e42dc24ff96b524ccaf75fae4a49f5d0f4de7ed0849315d53a48b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f5b5557d-6rptn" Jun 20 19:43:48.515557 kubelet[2775]: E0620 19:43:48.515366 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0302f39c83e42dc24ff96b524ccaf75fae4a49f5d0f4de7ed0849315d53a48b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f5b5557d-6rptn" Jun 20 19:43:48.515557 kubelet[2775]: E0620 19:43:48.515419 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f5b5557d-6rptn_calico-apiserver(47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f5b5557d-6rptn_calico-apiserver(47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0302f39c83e42dc24ff96b524ccaf75fae4a49f5d0f4de7ed0849315d53a48b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f5b5557d-6rptn" podUID="47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0" Jun 20 19:43:48.517310 containerd[1512]: time="2025-06-20T19:43:48.517224827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bbd87cbf5-cz57f,Uid:4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"452aa0971f7805701a664de67b9566bd25ac16737ca114425816fd2cc252abff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.517926 kubelet[2775]: E0620 19:43:48.517496 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452aa0971f7805701a664de67b9566bd25ac16737ca114425816fd2cc252abff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jun 20 19:43:48.517926 kubelet[2775]: E0620 19:43:48.517541 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452aa0971f7805701a664de67b9566bd25ac16737ca114425816fd2cc252abff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bbd87cbf5-cz57f" Jun 20 19:43:48.517926 kubelet[2775]: E0620 19:43:48.517563 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452aa0971f7805701a664de67b9566bd25ac16737ca114425816fd2cc252abff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bbd87cbf5-cz57f" Jun 20 19:43:48.518055 kubelet[2775]: E0620 19:43:48.517601 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bbd87cbf5-cz57f_calico-system(4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bbd87cbf5-cz57f_calico-system(4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"452aa0971f7805701a664de67b9566bd25ac16737ca114425816fd2cc252abff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bbd87cbf5-cz57f" podUID="4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f" Jun 20 19:43:48.534628 containerd[1512]: time="2025-06-20T19:43:48.534490934Z" level=error msg="Failed 
to destroy network for sandbox \"9c00136c9ceec90bf937c645a3ebf94ddd7c299a198d958b4b87fd6ae57e1bc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.537411 containerd[1512]: time="2025-06-20T19:43:48.537367613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-nq7xc,Uid:be112bd7-6812-4441-9f4a-bbae19965424,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c00136c9ceec90bf937c645a3ebf94ddd7c299a198d958b4b87fd6ae57e1bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.538341 kubelet[2775]: E0620 19:43:48.537760 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c00136c9ceec90bf937c645a3ebf94ddd7c299a198d958b4b87fd6ae57e1bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.538341 kubelet[2775]: E0620 19:43:48.537838 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c00136c9ceec90bf937c645a3ebf94ddd7c299a198d958b4b87fd6ae57e1bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-nq7xc" Jun 20 19:43:48.538341 kubelet[2775]: E0620 19:43:48.537870 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"9c00136c9ceec90bf937c645a3ebf94ddd7c299a198d958b4b87fd6ae57e1bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-nq7xc" Jun 20 19:43:48.538744 kubelet[2775]: E0620 19:43:48.537947 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-nq7xc_calico-system(be112bd7-6812-4441-9f4a-bbae19965424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-nq7xc_calico-system(be112bd7-6812-4441-9f4a-bbae19965424)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c00136c9ceec90bf937c645a3ebf94ddd7c299a198d958b4b87fd6ae57e1bc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-nq7xc" podUID="be112bd7-6812-4441-9f4a-bbae19965424" Jun 20 19:43:48.541755 containerd[1512]: time="2025-06-20T19:43:48.541354051Z" level=error msg="Failed to destroy network for sandbox \"f1853e196b2fdfc2653c0145a358fba9679f3350ef92feb45076629beed22049\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.546662 containerd[1512]: time="2025-06-20T19:43:48.546600802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86596f5f9b-psrk5,Uid:30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1853e196b2fdfc2653c0145a358fba9679f3350ef92feb45076629beed22049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.547230 kubelet[2775]: E0620 19:43:48.547165 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1853e196b2fdfc2653c0145a358fba9679f3350ef92feb45076629beed22049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.547376 kubelet[2775]: E0620 19:43:48.547259 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1853e196b2fdfc2653c0145a358fba9679f3350ef92feb45076629beed22049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86596f5f9b-psrk5" Jun 20 19:43:48.547376 kubelet[2775]: E0620 19:43:48.547284 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1853e196b2fdfc2653c0145a358fba9679f3350ef92feb45076629beed22049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86596f5f9b-psrk5" Jun 20 19:43:48.547868 kubelet[2775]: E0620 19:43:48.547408 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86596f5f9b-psrk5_calico-system(30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86596f5f9b-psrk5_calico-system(30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f1853e196b2fdfc2653c0145a358fba9679f3350ef92feb45076629beed22049\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86596f5f9b-psrk5" podUID="30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b" Jun 20 19:43:48.578987 containerd[1512]: time="2025-06-20T19:43:48.578919952Z" level=error msg="Failed to destroy network for sandbox \"e8b91b8b39f60ea10c26a614931313fb444a4f10f94bcab01e05b1d3c8124c3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.581347 containerd[1512]: time="2025-06-20T19:43:48.581248544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vb48f,Uid:39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b91b8b39f60ea10c26a614931313fb444a4f10f94bcab01e05b1d3c8124c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.581761 kubelet[2775]: E0620 19:43:48.581720 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b91b8b39f60ea10c26a614931313fb444a4f10f94bcab01e05b1d3c8124c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:43:48.582033 kubelet[2775]: E0620 19:43:48.581973 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e8b91b8b39f60ea10c26a614931313fb444a4f10f94bcab01e05b1d3c8124c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vb48f" Jun 20 19:43:48.582033 kubelet[2775]: E0620 19:43:48.582008 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b91b8b39f60ea10c26a614931313fb444a4f10f94bcab01e05b1d3c8124c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vb48f" Jun 20 19:43:48.582341 kubelet[2775]: E0620 19:43:48.582238 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vb48f_calico-system(39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vb48f_calico-system(39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8b91b8b39f60ea10c26a614931313fb444a4f10f94bcab01e05b1d3c8124c3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vb48f" podUID="39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd" Jun 20 19:43:48.719107 containerd[1512]: time="2025-06-20T19:43:48.718948490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:43:49.105447 systemd[1]: run-netns-cni\x2d81d51296\x2d4ab8\x2d5fdf\x2d7f8b\x2d51bfa2b44fde.mount: Deactivated successfully. Jun 20 19:43:49.105733 systemd[1]: run-netns-cni\x2d82fd9657\x2d829a\x2de5e5\x2d5aef\x2d01189f493ac1.mount: Deactivated successfully. 
Jun 20 19:43:49.105920 systemd[1]: run-netns-cni\x2dd4e5e6a3\x2d2dfa\x2dd1c0\x2d8e9e\x2d551f6fde5d39.mount: Deactivated successfully. Jun 20 19:43:49.106372 systemd[1]: run-netns-cni\x2df79e3206\x2d2676\x2d5f4a\x2dc254\x2d0804f83428a3.mount: Deactivated successfully. Jun 20 19:43:49.106559 systemd[1]: run-netns-cni\x2d9edf4540\x2d9a30\x2da4ab\x2d3aea\x2d64a2d2e1f0ee.mount: Deactivated successfully. Jun 20 19:43:49.106711 systemd[1]: run-netns-cni\x2d49ec2f53\x2dfa77\x2d8a41\x2d9988\x2d16eee76d1383.mount: Deactivated successfully. Jun 20 19:43:59.309514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount50152994.mount: Deactivated successfully. Jun 20 19:43:59.334430 containerd[1512]: time="2025-06-20T19:43:59.334124100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:43:59.336067 containerd[1512]: time="2025-06-20T19:43:59.335822611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:43:59.338201 containerd[1512]: time="2025-06-20T19:43:59.337732128Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:43:59.340520 containerd[1512]: time="2025-06-20T19:43:59.340485294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:43:59.341299 containerd[1512]: time="2025-06-20T19:43:59.341262891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 10.622126195s" Jun 20 19:43:59.341403 containerd[1512]: time="2025-06-20T19:43:59.341385961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:43:59.370789 containerd[1512]: time="2025-06-20T19:43:59.370726226Z" level=info msg="CreateContainer within sandbox \"9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:43:59.410239 containerd[1512]: time="2025-06-20T19:43:59.406624845Z" level=info msg="Container fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:43:59.409647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2785405172.mount: Deactivated successfully. Jun 20 19:43:59.451346 containerd[1512]: time="2025-06-20T19:43:59.451284914Z" level=info msg="CreateContainer within sandbox \"9438190a32f555b0e2a811f788f093da61112c13c8b5402114ff4dda2b7d2756\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d\"" Jun 20 19:43:59.453443 containerd[1512]: time="2025-06-20T19:43:59.453388485Z" level=info msg="StartContainer for \"fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d\"" Jun 20 19:43:59.457338 containerd[1512]: time="2025-06-20T19:43:59.457293438Z" level=info msg="connecting to shim fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d" address="unix:///run/containerd/s/afd8124b76021cdb0afbdec4f969b6ebc887e0e0d4baa6f12c1faa52891a99fb" protocol=ttrpc version=3 Jun 20 19:43:59.622637 systemd[1]: Started cri-containerd-fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d.scope - libcontainer container 
fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d. Jun 20 19:43:59.761214 containerd[1512]: time="2025-06-20T19:43:59.760712508Z" level=info msg="StartContainer for \"fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d\" returns successfully" Jun 20 19:43:59.902226 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:43:59.902413 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 20 19:43:59.929503 kubelet[2775]: I0620 19:43:59.928771 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hq9mj" podStartSLOduration=2.792633291 podStartE2EDuration="29.928509777s" podCreationTimestamp="2025-06-20 19:43:30 +0000 UTC" firstStartedPulling="2025-06-20 19:43:32.207214559 +0000 UTC m=+23.922840137" lastFinishedPulling="2025-06-20 19:43:59.343091055 +0000 UTC m=+51.058716623" observedRunningTime="2025-06-20 19:43:59.92552711 +0000 UTC m=+51.641152678" watchObservedRunningTime="2025-06-20 19:43:59.928509777 +0000 UTC m=+51.644135345" Jun 20 19:44:00.415908 containerd[1512]: time="2025-06-20T19:44:00.414661223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bbd87cbf5-cz57f,Uid:4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f,Namespace:calico-system,Attempt:0,}" Jun 20 19:44:00.541207 containerd[1512]: time="2025-06-20T19:44:00.540262490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d\" id:\"b3b65b12cde9d5ed7344c3e48ce9b4a30250841d90ea3e45acd6aea94cdb0056\" pid:3916 exit_status:1 exited_at:{seconds:1750448640 nanos:537528849}" Jun 20 19:44:00.548654 kubelet[2775]: I0620 19:44:00.548612 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-whisker-ca-bundle\") pod 
\"30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b\" (UID: \"30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b\") " Jun 20 19:44:00.548861 kubelet[2775]: I0620 19:44:00.548677 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-whisker-backend-key-pair\") pod \"30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b\" (UID: \"30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b\") " Jun 20 19:44:00.548861 kubelet[2775]: I0620 19:44:00.548711 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch7t4\" (UniqueName: \"kubernetes.io/projected/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-kube-api-access-ch7t4\") pod \"30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b\" (UID: \"30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b\") " Jun 20 19:44:00.553367 kubelet[2775]: I0620 19:44:00.551596 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b" (UID: "30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:44:00.644667 systemd[1]: var-lib-kubelet-pods-30a05ee6\x2d6dba\x2d44f8\x2d92c5\x2d17d3d6bc6e1b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jun 20 19:44:00.649650 kubelet[2775]: I0620 19:44:00.649586 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-whisker-ca-bundle\") on node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" DevicePath \"\"" Jun 20 19:44:00.651631 kubelet[2775]: I0620 19:44:00.651579 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b" (UID: "30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:44:00.653939 kubelet[2775]: I0620 19:44:00.653838 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-kube-api-access-ch7t4" (OuterVolumeSpecName: "kube-api-access-ch7t4") pod "30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b" (UID: "30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b"). InnerVolumeSpecName "kube-api-access-ch7t4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:44:00.656126 systemd[1]: var-lib-kubelet-pods-30a05ee6\x2d6dba\x2d44f8\x2d92c5\x2d17d3d6bc6e1b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dch7t4.mount: Deactivated successfully. 
Jun 20 19:44:00.751499 kubelet[2775]: I0620 19:44:00.751350 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-whisker-backend-key-pair\") on node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" DevicePath \"\"" Jun 20 19:44:00.751499 kubelet[2775]: I0620 19:44:00.751398 2775 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ch7t4\" (UniqueName: \"kubernetes.io/projected/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b-kube-api-access-ch7t4\") on node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" DevicePath \"\"" Jun 20 19:44:00.895627 systemd[1]: Removed slice kubepods-besteffort-pod30a05ee6_6dba_44f8_92c5_17d3d6bc6e1b.slice - libcontainer container kubepods-besteffort-pod30a05ee6_6dba_44f8_92c5_17d3d6bc6e1b.slice. Jun 20 19:44:01.112038 kubelet[2775]: I0620 19:44:01.111756 2775 status_manager.go:890] "Failed to get status for pod" podUID="8d937c69-8aca-4953-8065-c34de1c787d9" pod="calico-system/whisker-656c9c5c45-65wq9" err="pods \"whisker-656c9c5c45-65wq9\" is forbidden: User \"system:node:ci-4344-1-0-8-afb8bdccbb.novalocal\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object" Jun 20 19:44:01.114667 kubelet[2775]: W0620 19:44:01.112194 2775 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4344-1-0-8-afb8bdccbb.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object Jun 20 19:44:01.114667 kubelet[2775]: E0620 19:44:01.112294 2775 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is 
forbidden: User \"system:node:ci-4344-1-0-8-afb8bdccbb.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344-1-0-8-afb8bdccbb.novalocal' and this object" logger="UnhandledError" Jun 20 19:44:01.127409 systemd[1]: Created slice kubepods-besteffort-pod8d937c69_8aca_4953_8065_c34de1c787d9.slice - libcontainer container kubepods-besteffort-pod8d937c69_8aca_4953_8065_c34de1c787d9.slice. Jun 20 19:44:01.180791 systemd-networkd[1427]: cali912281287b3: Link UP Jun 20 19:44:01.182054 systemd-networkd[1427]: cali912281287b3: Gained carrier Jun 20 19:44:01.228117 containerd[1512]: 2025-06-20 19:44:00.519 [INFO][3929] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:44:01.228117 containerd[1512]: 2025-06-20 19:44:00.817 [INFO][3929] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0 calico-kube-controllers-5bbd87cbf5- calico-system 4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f 841 0 2025-06-20 19:43:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bbd87cbf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal calico-kube-controllers-5bbd87cbf5-cz57f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali912281287b3 [] [] }} ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Namespace="calico-system" Pod="calico-kube-controllers-5bbd87cbf5-cz57f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-" Jun 20 19:44:01.228117 containerd[1512]: 2025-06-20 19:44:00.817 [INFO][3929] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Namespace="calico-system" Pod="calico-kube-controllers-5bbd87cbf5-cz57f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" Jun 20 19:44:01.228117 containerd[1512]: 2025-06-20 19:44:00.880 [INFO][3950] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" HandleID="k8s-pod-network.1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" Jun 20 19:44:01.228508 containerd[1512]: 2025-06-20 19:44:00.881 [INFO][3950] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" HandleID="k8s-pod-network.1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000371b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"calico-kube-controllers-5bbd87cbf5-cz57f", "timestamp":"2025-06-20 19:44:00.880406491 +0000 UTC"}, Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:01.228508 containerd[1512]: 2025-06-20 19:44:00.883 [INFO][3950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:01.228508 containerd[1512]: 2025-06-20 19:44:00.883 [INFO][3950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:44:01.228508 containerd[1512]: 2025-06-20 19:44:00.883 [INFO][3950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:01.228508 containerd[1512]: 2025-06-20 19:44:00.911 [INFO][3950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.228508 containerd[1512]: 2025-06-20 19:44:00.929 [INFO][3950] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.228508 containerd[1512]: 2025-06-20 19:44:01.021 [INFO][3950] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.228508 containerd[1512]: 2025-06-20 19:44:01.026 [INFO][3950] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.228508 containerd[1512]: 2025-06-20 19:44:01.039 [INFO][3950] ipam/ipam.go 208: Affinity has not been confirmed - attempt to confirm it cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.228845 containerd[1512]: 2025-06-20 19:44:01.055 [ERROR][3950] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(ci-4344-1-0-8-afb8bdccbb.novalocal-192-168-25-64-26) Name="ci-4344-1-0-8-afb8bdccbb.novalocal-192-168-25-64-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"ci-4344-1-0-8-afb8bdccbb.novalocal-192-168-25-64-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"pending", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", Type:"host", CIDR:"192.168.25.64/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "ci-4344-1-0-8-afb8bdccbb.novalocal-192-168-25-64-26": the object has been modified; please apply your changes to the latest version and try again Jun 20 19:44:01.228845 containerd[1512]: 2025-06-20 19:44:01.055 [WARNING][3950] ipam/ipam.go 212: Error marking affinity as pending as part of confirmation process cidr=192.168.25.64/26 error=update conflict: BlockAffinity(ci-4344-1-0-8-afb8bdccbb.novalocal-192-168-25-64-26) host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.228845 containerd[1512]: 2025-06-20 19:44:01.055 [INFO][3950] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.228845 containerd[1512]: 2025-06-20 19:44:01.065 [INFO][3950] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.228845 containerd[1512]: 2025-06-20 19:44:01.070 [INFO][3950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.229040 containerd[1512]: 2025-06-20 19:44:01.070 [INFO][3950] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.229040 containerd[1512]: 2025-06-20 19:44:01.076 [INFO][3950] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075 Jun 20 19:44:01.229040 containerd[1512]: 2025-06-20 19:44:01.098 [INFO][3950] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.25.64/26 handle="k8s-pod-network.1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.229040 containerd[1512]: 2025-06-20 19:44:01.133 [INFO][3950] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.64/26] block=192.168.25.64/26 handle="k8s-pod-network.1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.229040 containerd[1512]: 2025-06-20 19:44:01.133 [INFO][3950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.64/26] handle="k8s-pod-network.1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.229040 containerd[1512]: 2025-06-20 19:44:01.133 [INFO][3950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:44:01.229040 containerd[1512]: 2025-06-20 19:44:01.134 [INFO][3950] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.64/26] IPv6=[] ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" HandleID="k8s-pod-network.1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" Jun 20 19:44:01.232062 containerd[1512]: 2025-06-20 19:44:01.140 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Namespace="calico-system" Pod="calico-kube-controllers-5bbd87cbf5-cz57f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0", GenerateName:"calico-kube-controllers-5bbd87cbf5-", 
Namespace:"calico-system", SelfLink:"", UID:"4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bbd87cbf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"calico-kube-controllers-5bbd87cbf5-cz57f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali912281287b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:01.232150 containerd[1512]: 2025-06-20 19:44:01.140 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.64/32] ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Namespace="calico-system" Pod="calico-kube-controllers-5bbd87cbf5-cz57f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" Jun 20 19:44:01.232150 containerd[1512]: 2025-06-20 19:44:01.140 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali912281287b3 ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Namespace="calico-system" 
Pod="calico-kube-controllers-5bbd87cbf5-cz57f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" Jun 20 19:44:01.232150 containerd[1512]: 2025-06-20 19:44:01.187 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Namespace="calico-system" Pod="calico-kube-controllers-5bbd87cbf5-cz57f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" Jun 20 19:44:01.234543 containerd[1512]: 2025-06-20 19:44:01.190 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Namespace="calico-system" Pod="calico-kube-controllers-5bbd87cbf5-cz57f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0", GenerateName:"calico-kube-controllers-5bbd87cbf5-", Namespace:"calico-system", SelfLink:"", UID:"4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bbd87cbf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075", Pod:"calico-kube-controllers-5bbd87cbf5-cz57f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali912281287b3", MAC:"b2:cb:19:f7:2b:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:01.234733 containerd[1512]: 2025-06-20 19:44:01.218 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" Namespace="calico-system" Pod="calico-kube-controllers-5bbd87cbf5-cz57f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--kube--controllers--5bbd87cbf5--cz57f-eth0" Jun 20 19:44:01.249294 containerd[1512]: time="2025-06-20T19:44:01.249233631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d\" id:\"42b33d12e86789ceb5400ab0243ac517b7c681fa34f316acfc1b5fed48be3720\" pid:3969 exit_status:1 exited_at:{seconds:1750448641 nanos:248487413}" Jun 20 19:44:01.260212 kubelet[2775]: I0620 19:44:01.259755 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k4z7\" (UniqueName: \"kubernetes.io/projected/8d937c69-8aca-4953-8065-c34de1c787d9-kube-api-access-4k4z7\") pod \"whisker-656c9c5c45-65wq9\" (UID: \"8d937c69-8aca-4953-8065-c34de1c787d9\") " pod="calico-system/whisker-656c9c5c45-65wq9" Jun 20 19:44:01.260212 kubelet[2775]: I0620 19:44:01.259854 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d937c69-8aca-4953-8065-c34de1c787d9-whisker-ca-bundle\") pod \"whisker-656c9c5c45-65wq9\" (UID: \"8d937c69-8aca-4953-8065-c34de1c787d9\") " pod="calico-system/whisker-656c9c5c45-65wq9" Jun 20 19:44:01.260212 kubelet[2775]: I0620 19:44:01.259951 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d937c69-8aca-4953-8065-c34de1c787d9-whisker-backend-key-pair\") pod \"whisker-656c9c5c45-65wq9\" (UID: \"8d937c69-8aca-4953-8065-c34de1c787d9\") " pod="calico-system/whisker-656c9c5c45-65wq9" Jun 20 19:44:01.345223 containerd[1512]: time="2025-06-20T19:44:01.345133964Z" level=info msg="connecting to shim 1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075" address="unix:///run/containerd/s/93e955824273fd2e47e53167a173728d20d59a62a8b2bd93b1102bec6e37002b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:01.408562 systemd[1]: Started cri-containerd-1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075.scope - libcontainer container 1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075. 
Jun 20 19:44:01.415629 containerd[1512]: time="2025-06-20T19:44:01.415400414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf54c67d-bm2gt,Uid:f88145ed-7b64-4352-a338-c1198720e562,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:44:01.416157 containerd[1512]: time="2025-06-20T19:44:01.415947930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vb48f,Uid:39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd,Namespace:calico-system,Attempt:0,}" Jun 20 19:44:01.417686 containerd[1512]: time="2025-06-20T19:44:01.417608481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mq9jd,Uid:408b62a3-728a-42d2-86d9-4db693b39e20,Namespace:kube-system,Attempt:0,}" Jun 20 19:44:01.750742 containerd[1512]: time="2025-06-20T19:44:01.749748845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bbd87cbf5-cz57f,Uid:4d816cc2-af80-4ebc-a21e-c0daa7f3bd4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075\"" Jun 20 19:44:01.757201 containerd[1512]: time="2025-06-20T19:44:01.757122903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:44:01.816347 systemd-networkd[1427]: cali4b23454f610: Link UP Jun 20 19:44:01.816739 systemd-networkd[1427]: cali4b23454f610: Gained carrier Jun 20 19:44:01.837167 containerd[1512]: 2025-06-20 19:44:01.514 [INFO][4039] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:44:01.837167 containerd[1512]: 2025-06-20 19:44:01.553 [INFO][4039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0 csi-node-driver- calico-system 39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd 702 0 2025-06-20 19:43:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal csi-node-driver-vb48f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4b23454f610 [] [] }} ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Namespace="calico-system" Pod="csi-node-driver-vb48f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-" Jun 20 19:44:01.837167 containerd[1512]: 2025-06-20 19:44:01.553 [INFO][4039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Namespace="calico-system" Pod="csi-node-driver-vb48f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" Jun 20 19:44:01.837167 containerd[1512]: 2025-06-20 19:44:01.709 [INFO][4070] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" HandleID="k8s-pod-network.59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" Jun 20 19:44:01.837684 containerd[1512]: 2025-06-20 19:44:01.710 [INFO][4070] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" HandleID="k8s-pod-network.59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005de040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"csi-node-driver-vb48f", "timestamp":"2025-06-20 19:44:01.709325677 +0000 UTC"}, 
Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:01.837684 containerd[1512]: 2025-06-20 19:44:01.710 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:01.837684 containerd[1512]: 2025-06-20 19:44:01.710 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:44:01.837684 containerd[1512]: 2025-06-20 19:44:01.710 [INFO][4070] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:01.837684 containerd[1512]: 2025-06-20 19:44:01.730 [INFO][4070] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.837684 containerd[1512]: 2025-06-20 19:44:01.742 [INFO][4070] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.837684 containerd[1512]: 2025-06-20 19:44:01.759 [INFO][4070] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.837684 containerd[1512]: 2025-06-20 19:44:01.769 [INFO][4070] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.837684 containerd[1512]: 2025-06-20 19:44:01.775 [INFO][4070] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.838137 containerd[1512]: 2025-06-20 19:44:01.775 [INFO][4070] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" 
host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.838137 containerd[1512]: 2025-06-20 19:44:01.781 [INFO][4070] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d Jun 20 19:44:01.838137 containerd[1512]: 2025-06-20 19:44:01.791 [INFO][4070] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.838137 containerd[1512]: 2025-06-20 19:44:01.799 [INFO][4070] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.66/26] block=192.168.25.64/26 handle="k8s-pod-network.59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.838137 containerd[1512]: 2025-06-20 19:44:01.799 [INFO][4070] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.66/26] handle="k8s-pod-network.59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.838137 containerd[1512]: 2025-06-20 19:44:01.799 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:44:01.838137 containerd[1512]: 2025-06-20 19:44:01.799 [INFO][4070] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.66/26] IPv6=[] ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" HandleID="k8s-pod-network.59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" Jun 20 19:44:01.839621 containerd[1512]: 2025-06-20 19:44:01.802 [INFO][4039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Namespace="calico-system" Pod="csi-node-driver-vb48f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"csi-node-driver-vb48f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.25.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b23454f610", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:01.839746 containerd[1512]: 2025-06-20 19:44:01.802 [INFO][4039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.66/32] ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Namespace="calico-system" Pod="csi-node-driver-vb48f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" Jun 20 19:44:01.839746 containerd[1512]: 2025-06-20 19:44:01.802 [INFO][4039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b23454f610 ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Namespace="calico-system" Pod="csi-node-driver-vb48f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" Jun 20 19:44:01.839746 containerd[1512]: 2025-06-20 19:44:01.817 [INFO][4039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Namespace="calico-system" Pod="csi-node-driver-vb48f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" Jun 20 19:44:01.839911 containerd[1512]: 2025-06-20 19:44:01.817 [INFO][4039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Namespace="calico-system" Pod="csi-node-driver-vb48f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d", Pod:"csi-node-driver-vb48f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b23454f610", MAC:"6a:10:af:dc:44:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:01.840000 containerd[1512]: 2025-06-20 19:44:01.834 [INFO][4039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" Namespace="calico-system" Pod="csi-node-driver-vb48f" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-csi--node--driver--vb48f-eth0" Jun 20 19:44:01.894952 containerd[1512]: time="2025-06-20T19:44:01.894884608Z" level=info msg="connecting to shim 
59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d" address="unix:///run/containerd/s/5d67fe74056534e9852f7c0e0c87950189e032c0d03b3ad48c79bf4c3115c36b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:01.905902 systemd-networkd[1427]: calib3b45611918: Link UP Jun 20 19:44:01.907911 systemd-networkd[1427]: calib3b45611918: Gained carrier Jun 20 19:44:01.948608 containerd[1512]: 2025-06-20 19:44:01.576 [INFO][4054] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:44:01.948608 containerd[1512]: 2025-06-20 19:44:01.605 [INFO][4054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0 coredns-668d6bf9bc- kube-system 408b62a3-728a-42d2-86d9-4db693b39e20 831 0 2025-06-20 19:43:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal coredns-668d6bf9bc-mq9jd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib3b45611918 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Namespace="kube-system" Pod="coredns-668d6bf9bc-mq9jd" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-" Jun 20 19:44:01.948608 containerd[1512]: 2025-06-20 19:44:01.606 [INFO][4054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Namespace="kube-system" Pod="coredns-668d6bf9bc-mq9jd" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" Jun 20 19:44:01.948608 containerd[1512]: 2025-06-20 19:44:01.748 [INFO][4080] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" HandleID="k8s-pod-network.57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" Jun 20 19:44:01.949051 containerd[1512]: 2025-06-20 19:44:01.754 [INFO][4080] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" HandleID="k8s-pod-network.57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7890), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"coredns-668d6bf9bc-mq9jd", "timestamp":"2025-06-20 19:44:01.74851366 +0000 UTC"}, Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:01.949051 containerd[1512]: 2025-06-20 19:44:01.756 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:01.949051 containerd[1512]: 2025-06-20 19:44:01.800 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:44:01.949051 containerd[1512]: 2025-06-20 19:44:01.800 [INFO][4080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:01.949051 containerd[1512]: 2025-06-20 19:44:01.830 [INFO][4080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.949051 containerd[1512]: 2025-06-20 19:44:01.852 [INFO][4080] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.949051 containerd[1512]: 2025-06-20 19:44:01.865 [INFO][4080] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.949051 containerd[1512]: 2025-06-20 19:44:01.868 [INFO][4080] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.949051 containerd[1512]: 2025-06-20 19:44:01.871 [INFO][4080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.950221 containerd[1512]: 2025-06-20 19:44:01.872 [INFO][4080] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.950221 containerd[1512]: 2025-06-20 19:44:01.873 [INFO][4080] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946 Jun 20 19:44:01.950221 containerd[1512]: 2025-06-20 19:44:01.884 [INFO][4080] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.950221 
containerd[1512]: 2025-06-20 19:44:01.897 [INFO][4080] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.67/26] block=192.168.25.64/26 handle="k8s-pod-network.57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.950221 containerd[1512]: 2025-06-20 19:44:01.897 [INFO][4080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.67/26] handle="k8s-pod-network.57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:01.950221 containerd[1512]: 2025-06-20 19:44:01.897 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:44:01.950221 containerd[1512]: 2025-06-20 19:44:01.897 [INFO][4080] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.67/26] IPv6=[] ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" HandleID="k8s-pod-network.57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" Jun 20 19:44:01.950497 containerd[1512]: 2025-06-20 19:44:01.901 [INFO][4054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Namespace="kube-system" Pod="coredns-668d6bf9bc-mq9jd" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"408b62a3-728a-42d2-86d9-4db693b39e20", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-mq9jd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3b45611918", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:01.950497 containerd[1512]: 2025-06-20 19:44:01.901 [INFO][4054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.67/32] ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Namespace="kube-system" Pod="coredns-668d6bf9bc-mq9jd" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" Jun 20 19:44:01.950497 containerd[1512]: 2025-06-20 19:44:01.901 [INFO][4054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3b45611918 ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Namespace="kube-system" Pod="coredns-668d6bf9bc-mq9jd" 
WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" Jun 20 19:44:01.950497 containerd[1512]: 2025-06-20 19:44:01.908 [INFO][4054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Namespace="kube-system" Pod="coredns-668d6bf9bc-mq9jd" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" Jun 20 19:44:01.950497 containerd[1512]: 2025-06-20 19:44:01.909 [INFO][4054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Namespace="kube-system" Pod="coredns-668d6bf9bc-mq9jd" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"408b62a3-728a-42d2-86d9-4db693b39e20", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946", Pod:"coredns-668d6bf9bc-mq9jd", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.25.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3b45611918", MAC:"e6:6f:09:e8:a5:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:01.950497 containerd[1512]: 2025-06-20 19:44:01.940 [INFO][4054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" Namespace="kube-system" Pod="coredns-668d6bf9bc-mq9jd" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--mq9jd-eth0" Jun 20 19:44:01.991526 systemd[1]: Started cri-containerd-59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d.scope - libcontainer container 59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d. 
Jun 20 19:44:02.042232 systemd-networkd[1427]: cali37fa402a8e6: Link UP Jun 20 19:44:02.043799 systemd-networkd[1427]: cali37fa402a8e6: Gained carrier Jun 20 19:44:02.075049 containerd[1512]: time="2025-06-20T19:44:02.073807103Z" level=info msg="connecting to shim 57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946" address="unix:///run/containerd/s/21ff35946d2dfcddfb1cfb1da742a5b5b978b23ae2012cd427e5e8672e35b181" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:02.082877 containerd[1512]: time="2025-06-20T19:44:02.082830304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vb48f,Uid:39f7ce5c-cc5a-4e2a-84b2-7eb435c846cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d\"" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.598 [INFO][4041] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.652 [INFO][4041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0 calico-apiserver-7bcf54c67d- calico-apiserver f88145ed-7b64-4352-a338-c1198720e562 838 0 2025-06-20 19:43:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bcf54c67d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal calico-apiserver-7bcf54c67d-bm2gt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali37fa402a8e6 [] [] }} ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-bm2gt" 
WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.653 [INFO][4041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-bm2gt" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.784 [INFO][4088] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" HandleID="k8s-pod-network.b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.785 [INFO][4088] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" HandleID="k8s-pod-network.b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000337950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"calico-apiserver-7bcf54c67d-bm2gt", "timestamp":"2025-06-20 19:44:01.784083602 +0000 UTC"}, Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.785 [INFO][4088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.897 [INFO][4088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.897 [INFO][4088] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.935 [INFO][4088] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.951 [INFO][4088] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.970 [INFO][4088] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.975 [INFO][4088] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.984 [INFO][4088] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.987 [INFO][4088] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.990 [INFO][4088] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5 Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:01.999 [INFO][4088] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.64/26 
handle="k8s-pod-network.b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:02.026 [INFO][4088] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.68/26] block=192.168.25.64/26 handle="k8s-pod-network.b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:02.026 [INFO][4088] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.68/26] handle="k8s-pod-network.b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:02.027 [INFO][4088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:44:02.086647 containerd[1512]: 2025-06-20 19:44:02.027 [INFO][4088] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.68/26] IPv6=[] ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" HandleID="k8s-pod-network.b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" Jun 20 19:44:02.089236 containerd[1512]: 2025-06-20 19:44:02.032 [INFO][4041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-bm2gt" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0", GenerateName:"calico-apiserver-7bcf54c67d-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"f88145ed-7b64-4352-a338-c1198720e562", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcf54c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"calico-apiserver-7bcf54c67d-bm2gt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37fa402a8e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:02.089236 containerd[1512]: 2025-06-20 19:44:02.032 [INFO][4041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.68/32] ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-bm2gt" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" Jun 20 19:44:02.089236 containerd[1512]: 2025-06-20 19:44:02.033 [INFO][4041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37fa402a8e6 ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-bm2gt" 
WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" Jun 20 19:44:02.089236 containerd[1512]: 2025-06-20 19:44:02.059 [INFO][4041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-bm2gt" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" Jun 20 19:44:02.089236 containerd[1512]: 2025-06-20 19:44:02.059 [INFO][4041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-bm2gt" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0", GenerateName:"calico-apiserver-7bcf54c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f88145ed-7b64-4352-a338-c1198720e562", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcf54c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5", Pod:"calico-apiserver-7bcf54c67d-bm2gt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37fa402a8e6", MAC:"e6:f1:59:cb:97:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:02.089236 containerd[1512]: 2025-06-20 19:44:02.080 [INFO][4041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-bm2gt" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--bm2gt-eth0" Jun 20 19:44:02.121448 systemd[1]: Started cri-containerd-57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946.scope - libcontainer container 57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946. Jun 20 19:44:02.133237 containerd[1512]: time="2025-06-20T19:44:02.132992531Z" level=info msg="connecting to shim b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5" address="unix:///run/containerd/s/da74028ec931b906befcaf17064037ef38ae295892c51ef2d7ed2d363376c387" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:02.172338 systemd[1]: Started cri-containerd-b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5.scope - libcontainer container b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5. 
Jun 20 19:44:02.206817 containerd[1512]: time="2025-06-20T19:44:02.206758407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mq9jd,Uid:408b62a3-728a-42d2-86d9-4db693b39e20,Namespace:kube-system,Attempt:0,} returns sandbox id \"57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946\"" Jun 20 19:44:02.212495 containerd[1512]: time="2025-06-20T19:44:02.212433544Z" level=info msg="CreateContainer within sandbox \"57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:44:02.243727 containerd[1512]: time="2025-06-20T19:44:02.243583527Z" level=info msg="Container 19995ee33e417c16b3732516834a485b7c168f3e9210510344036f8f217a6060: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:02.270706 containerd[1512]: time="2025-06-20T19:44:02.269507505Z" level=info msg="CreateContainer within sandbox \"57e3441e3bcd82246ea5a29ff2751255641fbc376d2181c852a35844f9bc8946\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19995ee33e417c16b3732516834a485b7c168f3e9210510344036f8f217a6060\"" Jun 20 19:44:02.272409 containerd[1512]: time="2025-06-20T19:44:02.272352437Z" level=info msg="StartContainer for \"19995ee33e417c16b3732516834a485b7c168f3e9210510344036f8f217a6060\"" Jun 20 19:44:02.272985 containerd[1512]: time="2025-06-20T19:44:02.272769959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf54c67d-bm2gt,Uid:f88145ed-7b64-4352-a338-c1198720e562,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5\"" Jun 20 19:44:02.274785 containerd[1512]: time="2025-06-20T19:44:02.274125470Z" level=info msg="connecting to shim 19995ee33e417c16b3732516834a485b7c168f3e9210510344036f8f217a6060" address="unix:///run/containerd/s/21ff35946d2dfcddfb1cfb1da742a5b5b978b23ae2012cd427e5e8672e35b181" protocol=ttrpc version=3 Jun 20 19:44:02.306610 systemd[1]: 
Started cri-containerd-19995ee33e417c16b3732516834a485b7c168f3e9210510344036f8f217a6060.scope - libcontainer container 19995ee33e417c16b3732516834a485b7c168f3e9210510344036f8f217a6060. Jun 20 19:44:02.346482 containerd[1512]: time="2025-06-20T19:44:02.345568783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656c9c5c45-65wq9,Uid:8d937c69-8aca-4953-8065-c34de1c787d9,Namespace:calico-system,Attempt:0,}" Jun 20 19:44:02.387909 containerd[1512]: time="2025-06-20T19:44:02.387868313Z" level=info msg="StartContainer for \"19995ee33e417c16b3732516834a485b7c168f3e9210510344036f8f217a6060\" returns successfully" Jun 20 19:44:02.391057 systemd-networkd[1427]: cali912281287b3: Gained IPv6LL Jun 20 19:44:02.413787 containerd[1512]: time="2025-06-20T19:44:02.413648612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z95lx,Uid:e57eaa74-3032-4619-bb92-dee9a0197bd2,Namespace:kube-system,Attempt:0,}" Jun 20 19:44:02.421672 kubelet[2775]: I0620 19:44:02.420072 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b" path="/var/lib/kubelet/pods/30a05ee6-6dba-44f8-92c5-17d3d6bc6e1b/volumes" Jun 20 19:44:02.640760 systemd-networkd[1427]: cali933cfda90d7: Link UP Jun 20 19:44:02.643016 systemd-networkd[1427]: cali933cfda90d7: Gained carrier Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.435 [INFO][4296] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.455 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0 whisker-656c9c5c45- calico-system 8d937c69-8aca-4953-8065-c34de1c787d9 912 0 2025-06-20 19:44:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:656c9c5c45 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal whisker-656c9c5c45-65wq9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali933cfda90d7 [] [] }} ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Namespace="calico-system" Pod="whisker-656c9c5c45-65wq9" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.455 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Namespace="calico-system" Pod="whisker-656c9c5c45-65wq9" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.503 [INFO][4319] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" HandleID="k8s-pod-network.f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.504 [INFO][4319] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" HandleID="k8s-pod-network.f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"whisker-656c9c5c45-65wq9", "timestamp":"2025-06-20 19:44:02.503900597 +0000 UTC"}, Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.504 [INFO][4319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.505 [INFO][4319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.505 [INFO][4319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.527 [INFO][4319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.537 [INFO][4319] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.546 [INFO][4319] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.549 [INFO][4319] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.553 [INFO][4319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.553 [INFO][4319] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.556 [INFO][4319] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.591 [INFO][4319] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.625 [INFO][4319] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.69/26] block=192.168.25.64/26 handle="k8s-pod-network.f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.625 [INFO][4319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.69/26] handle="k8s-pod-network.f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.625 [INFO][4319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:44:02.696549 containerd[1512]: 2025-06-20 19:44:02.625 [INFO][4319] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.69/26] IPv6=[] ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" HandleID="k8s-pod-network.f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" Jun 20 19:44:02.699048 containerd[1512]: 2025-06-20 19:44:02.629 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Namespace="calico-system" Pod="whisker-656c9c5c45-65wq9" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0", GenerateName:"whisker-656c9c5c45-", Namespace:"calico-system", SelfLink:"", UID:"8d937c69-8aca-4953-8065-c34de1c787d9", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"656c9c5c45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"whisker-656c9c5c45-65wq9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali933cfda90d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:02.699048 containerd[1512]: 2025-06-20 19:44:02.629 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.69/32] ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Namespace="calico-system" Pod="whisker-656c9c5c45-65wq9" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" Jun 20 19:44:02.699048 containerd[1512]: 2025-06-20 19:44:02.629 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali933cfda90d7 ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Namespace="calico-system" Pod="whisker-656c9c5c45-65wq9" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" Jun 20 19:44:02.699048 containerd[1512]: 2025-06-20 19:44:02.641 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Namespace="calico-system" Pod="whisker-656c9c5c45-65wq9" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" Jun 20 19:44:02.699048 containerd[1512]: 2025-06-20 19:44:02.646 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Namespace="calico-system" Pod="whisker-656c9c5c45-65wq9" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0", 
GenerateName:"whisker-656c9c5c45-", Namespace:"calico-system", SelfLink:"", UID:"8d937c69-8aca-4953-8065-c34de1c787d9", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"656c9c5c45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed", Pod:"whisker-656c9c5c45-65wq9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali933cfda90d7", MAC:"2a:65:0e:16:28:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:02.699048 containerd[1512]: 2025-06-20 19:44:02.692 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" Namespace="calico-system" Pod="whisker-656c9c5c45-65wq9" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-whisker--656c9c5c45--65wq9-eth0" Jun 20 19:44:03.094410 systemd-networkd[1427]: cali4b23454f610: Gained IPv6LL Jun 20 19:44:03.166245 kubelet[2775]: I0620 19:44:03.165531 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mq9jd" podStartSLOduration=49.164450448 podStartE2EDuration="49.164450448s" 
podCreationTimestamp="2025-06-20 19:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:44:03.157964981 +0000 UTC m=+54.873590599" watchObservedRunningTime="2025-06-20 19:44:03.164450448 +0000 UTC m=+54.880076067" Jun 20 19:44:03.363960 containerd[1512]: time="2025-06-20T19:44:03.363651048Z" level=info msg="connecting to shim f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed" address="unix:///run/containerd/s/f7341fd6965377b57e80584f37c0f71f45bbf80ca8c77127baa11cc8933a4e4d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:03.418763 containerd[1512]: time="2025-06-20T19:44:03.418496995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-nq7xc,Uid:be112bd7-6812-4441-9f4a-bbae19965424,Namespace:calico-system,Attempt:0,}" Jun 20 19:44:03.433346 containerd[1512]: time="2025-06-20T19:44:03.432794778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f5b5557d-6rptn,Uid:47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:44:03.433346 containerd[1512]: time="2025-06-20T19:44:03.433050457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f5b5557d-j7slj,Uid:807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:44:03.531466 systemd[1]: Started cri-containerd-f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed.scope - libcontainer container f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed. 
Jun 20 19:44:03.791146 containerd[1512]: time="2025-06-20T19:44:03.791104750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656c9c5c45-65wq9,Uid:8d937c69-8aca-4953-8065-c34de1c787d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed\"" Jun 20 19:44:03.798369 systemd-networkd[1427]: calib3b45611918: Gained IPv6LL Jun 20 19:44:03.799850 systemd-networkd[1427]: cali37fa402a8e6: Gained IPv6LL Jun 20 19:44:03.881933 systemd-networkd[1427]: calic4e766b88d4: Link UP Jun 20 19:44:03.886054 systemd-networkd[1427]: calic4e766b88d4: Gained carrier Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.334 [INFO][4412] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.379 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0 coredns-668d6bf9bc- kube-system e57eaa74-3032-4619-bb92-dee9a0197bd2 834 0 2025-06-20 19:43:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal coredns-668d6bf9bc-z95lx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic4e766b88d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-z95lx" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.379 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-z95lx" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.707 [INFO][4449] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" HandleID="k8s-pod-network.ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.714 [INFO][4449] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" HandleID="k8s-pod-network.ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f17a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"coredns-668d6bf9bc-z95lx", "timestamp":"2025-06-20 19:44:03.70763223 +0000 UTC"}, Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.714 [INFO][4449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.715 [INFO][4449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.715 [INFO][4449] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.740 [INFO][4449] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.768 [INFO][4449] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.781 [INFO][4449] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.787 [INFO][4449] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.808 [INFO][4449] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.808 [INFO][4449] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.817 [INFO][4449] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.834 [INFO][4449] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:03.937701 
containerd[1512]: 2025-06-20 19:44:03.853 [INFO][4449] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.70/26] block=192.168.25.64/26 handle="k8s-pod-network.ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.853 [INFO][4449] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.70/26] handle="k8s-pod-network.ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.853 [INFO][4449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:44:03.937701 containerd[1512]: 2025-06-20 19:44:03.853 [INFO][4449] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.70/26] IPv6=[] ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" HandleID="k8s-pod-network.ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" Jun 20 19:44:03.940132 containerd[1512]: 2025-06-20 19:44:03.863 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-z95lx" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e57eaa74-3032-4619-bb92-dee9a0197bd2", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-z95lx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4e766b88d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:03.940132 containerd[1512]: 2025-06-20 19:44:03.864 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.70/32] ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-z95lx" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" Jun 20 19:44:03.940132 containerd[1512]: 2025-06-20 19:44:03.864 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4e766b88d4 ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-z95lx" 
WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" Jun 20 19:44:03.940132 containerd[1512]: 2025-06-20 19:44:03.896 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-z95lx" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" Jun 20 19:44:03.940132 containerd[1512]: 2025-06-20 19:44:03.902 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-z95lx" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e57eaa74-3032-4619-bb92-dee9a0197bd2", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e", Pod:"coredns-668d6bf9bc-z95lx", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.25.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4e766b88d4", MAC:"b6:4e:f8:05:63:dd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:03.940132 containerd[1512]: 2025-06-20 19:44:03.931 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-z95lx" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-coredns--668d6bf9bc--z95lx-eth0" Jun 20 19:44:04.026767 systemd-networkd[1427]: cali85c828e663b: Link UP Jun 20 19:44:04.029279 systemd-networkd[1427]: cali85c828e663b: Gained carrier Jun 20 19:44:04.038481 containerd[1512]: time="2025-06-20T19:44:04.038431804Z" level=info msg="connecting to shim ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e" address="unix:///run/containerd/s/ce4575f4831d9cd1838ba2c4126ea5114b65a78d0618f5a4ccdf1ef00d5bde41" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.667 [INFO][4480] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.710 [INFO][4480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0 calico-apiserver-8f5b5557d- calico-apiserver 807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8 843 0 2025-06-20 19:43:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f5b5557d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal calico-apiserver-8f5b5557d-j7slj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali85c828e663b [] [] }} ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-j7slj" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.711 [INFO][4480] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-j7slj" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.878 [INFO][4548] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.878 [INFO][4548] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" 
Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"calico-apiserver-8f5b5557d-j7slj", "timestamp":"2025-06-20 19:44:03.878288879 +0000 UTC"}, Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.878 [INFO][4548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.878 [INFO][4548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.878 [INFO][4548] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.896 [INFO][4548] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.908 [INFO][4548] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.934 [INFO][4548] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.940 [INFO][4548] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.954 [INFO][4548] ipam/ipam.go 235: Affinity is confirmed and 
block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.954 [INFO][4548] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.958 [INFO][4548] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597 Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.971 [INFO][4548] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.986 [INFO][4548] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.71/26] block=192.168.25.64/26 handle="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.987 [INFO][4548] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.71/26] handle="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.987 [INFO][4548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
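[Editor's note] The IPAM sequence above (acquire host-wide lock, confirm the host's block affinity for 192.168.25.64/26, load the block, claim the lowest free address, write the block back, release the lock) can be modelled with a small sketch. This is an illustrative simplification only — it reproduces the "lowest free host address from an affine block" behaviour visible in the log, not Calico's actual implementation, which also tracks handles, block affinities, and datastore writes:

```python
import ipaddress

def assign_from_block(cidr, allocated, count=1):
    """Hand out the lowest free host addresses from an affine block
    (e.g. the /26 in the log above). Simplified model: no locking,
    no handle tracking, no datastore write-back."""
    block = ipaddress.ip_network(cidr)
    taken = {ipaddress.ip_address(a) for a in allocated}
    out = []
    for ip in block.hosts():          # iterate .65 .. .126 for a /26
        if ip not in taken:
            out.append(str(ip))
            taken.add(ip)
            if len(out) == count:
                break
    return out

# With .65-.70 already claimed (coredns got .70 above), the next
# auto-assignment lands on .71 -- matching the apiserver pod's IP.
next_ip = assign_from_block(
    "192.168.25.64/26",
    ["192.168.25.%d" % i for i in range(65, 71)],
)
```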
Jun 20 19:44:04.070113 containerd[1512]: 2025-06-20 19:44:03.987 [INFO][4548] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.71/26] IPv6=[] ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" Jun 20 19:44:04.071571 containerd[1512]: 2025-06-20 19:44:03.993 [INFO][4480] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-j7slj" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0", GenerateName:"calico-apiserver-8f5b5557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f5b5557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"calico-apiserver-8f5b5557d-j7slj", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85c828e663b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:04.071571 containerd[1512]: 2025-06-20 19:44:03.995 [INFO][4480] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.71/32] ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-j7slj" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" Jun 20 19:44:04.071571 containerd[1512]: 2025-06-20 19:44:03.997 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85c828e663b ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-j7slj" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" Jun 20 19:44:04.071571 containerd[1512]: 2025-06-20 19:44:04.031 [INFO][4480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-j7slj" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" Jun 20 19:44:04.071571 containerd[1512]: 2025-06-20 19:44:04.034 [INFO][4480] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-j7slj" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0", GenerateName:"calico-apiserver-8f5b5557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f5b5557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597", Pod:"calico-apiserver-8f5b5557d-j7slj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85c828e663b", MAC:"b2:e1:fb:f1:03:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:04.071571 containerd[1512]: 2025-06-20 19:44:04.061 [INFO][4480] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-j7slj" 
WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0" Jun 20 19:44:04.105019 systemd[1]: Started cri-containerd-ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e.scope - libcontainer container ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e. Jun 20 19:44:04.117160 systemd-networkd[1427]: cali6d1978fc439: Link UP Jun 20 19:44:04.117510 systemd-networkd[1427]: cali6d1978fc439: Gained carrier Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:03.783 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0 goldmane-5bd85449d4- calico-system be112bd7-6812-4441-9f4a-bbae19965424 845 0 2025-06-20 19:43:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal goldmane-5bd85449d4-nq7xc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6d1978fc439 [] [] }} ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Namespace="calico-system" Pod="goldmane-5bd85449d4-nq7xc" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:03.784 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Namespace="calico-system" Pod="goldmane-5bd85449d4-nq7xc" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:03.928 [INFO][4559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" HandleID="k8s-pod-network.87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:03.929 [INFO][4559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" HandleID="k8s-pod-network.87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000231760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"goldmane-5bd85449d4-nq7xc", "timestamp":"2025-06-20 19:44:03.928070542 +0000 UTC"}, Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:03.930 [INFO][4559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:03.987 [INFO][4559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:03.988 [INFO][4559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.011 [INFO][4559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.027 [INFO][4559] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.043 [INFO][4559] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.049 [INFO][4559] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.057 [INFO][4559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.057 [INFO][4559] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.062 [INFO][4559] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.081 [INFO][4559] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.166385 
containerd[1512]: 2025-06-20 19:44:04.097 [INFO][4559] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.72/26] block=192.168.25.64/26 handle="k8s-pod-network.87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.097 [INFO][4559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.72/26] handle="k8s-pod-network.87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.097 [INFO][4559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:44:04.166385 containerd[1512]: 2025-06-20 19:44:04.098 [INFO][4559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.72/26] IPv6=[] ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" HandleID="k8s-pod-network.87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" Jun 20 19:44:04.167723 containerd[1512]: 2025-06-20 19:44:04.111 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Namespace="calico-system" Pod="goldmane-5bd85449d4-nq7xc" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"be112bd7-6812-4441-9f4a-bbae19965424", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"goldmane-5bd85449d4-nq7xc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d1978fc439", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:04.167723 containerd[1512]: 2025-06-20 19:44:04.111 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.72/32] ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Namespace="calico-system" Pod="goldmane-5bd85449d4-nq7xc" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" Jun 20 19:44:04.167723 containerd[1512]: 2025-06-20 19:44:04.111 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d1978fc439 ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Namespace="calico-system" Pod="goldmane-5bd85449d4-nq7xc" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" Jun 20 19:44:04.167723 containerd[1512]: 2025-06-20 19:44:04.115 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Namespace="calico-system" Pod="goldmane-5bd85449d4-nq7xc" 
WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" Jun 20 19:44:04.167723 containerd[1512]: 2025-06-20 19:44:04.116 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Namespace="calico-system" Pod="goldmane-5bd85449d4-nq7xc" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"be112bd7-6812-4441-9f4a-bbae19965424", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b", Pod:"goldmane-5bd85449d4-nq7xc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d1978fc439", MAC:"92:0a:cd:62:53:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:04.167723 containerd[1512]: 2025-06-20 19:44:04.163 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" Namespace="calico-system" Pod="goldmane-5bd85449d4-nq7xc" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-goldmane--5bd85449d4--nq7xc-eth0" Jun 20 19:44:04.171668 containerd[1512]: time="2025-06-20T19:44:04.171619492Z" level=info msg="connecting to shim 41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" address="unix:///run/containerd/s/a78eee3c50c1f581041f8a489742d89d5fc9bb1783a4c1744f1a555decc16269" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:04.234480 systemd[1]: Started cri-containerd-41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597.scope - libcontainer container 41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597. Jun 20 19:44:04.238032 containerd[1512]: time="2025-06-20T19:44:04.237952879Z" level=info msg="connecting to shim 87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b" address="unix:///run/containerd/s/85f00f0ed390b3505dfc2100f5b1b35c72e36167abdc418aca2317651ca3aa81" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:04.279295 systemd-networkd[1427]: calif5e997cd367: Link UP Jun 20 19:44:04.281303 systemd-networkd[1427]: calif5e997cd367: Gained carrier Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:03.812 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0 calico-apiserver-8f5b5557d- calico-apiserver 47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0 844 0 2025-06-20 19:43:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f5b5557d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal calico-apiserver-8f5b5557d-6rptn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif5e997cd367 [] [] }} ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-6rptn" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:03.817 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-6rptn" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:03.970 [INFO][4564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:03.971 [INFO][4564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305de0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"calico-apiserver-8f5b5557d-6rptn", "timestamp":"2025-06-20 19:44:03.969998191 +0000 UTC"}, 
Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:03.972 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.097 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.098 [INFO][4564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.119 [INFO][4564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.151 [INFO][4564] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.172 [INFO][4564] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.179 [INFO][4564] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.188 [INFO][4564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.189 [INFO][4564] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" 
host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.199 [INFO][4564] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738 Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.212 [INFO][4564] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.244 [INFO][4564] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.73/26] block=192.168.25.64/26 handle="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.245 [INFO][4564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.73/26] handle="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.246 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
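[Editor's note] The `WorkloadEndpointPort` entries printed earlier (via Go's `%#v` verb) show port numbers in hex, which can obscure what is actually a familiar CoreDNS configuration. Decoding them is trivial; the mapping below assumes only what the log itself shows:

```python
# Port values as printed in the v3.WorkloadEndpointPort dump above.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}

# 0x35 == 53 (DNS over UDP and TCP); 0x23c1 == 9153, the standard
# CoreDNS Prometheus metrics port.
decoded = {name: int(value) for name, value in ports.items()}
```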
Jun 20 19:44:04.338963 containerd[1512]: 2025-06-20 19:44:04.246 [INFO][4564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.73/26] IPv6=[] ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:04.342542 containerd[1512]: 2025-06-20 19:44:04.268 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-6rptn" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0", GenerateName:"calico-apiserver-8f5b5557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f5b5557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"calico-apiserver-8f5b5557d-6rptn", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5e997cd367", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:04.342542 containerd[1512]: 2025-06-20 19:44:04.268 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.73/32] ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-6rptn" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:04.342542 containerd[1512]: 2025-06-20 19:44:04.268 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5e997cd367 ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-6rptn" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:04.342542 containerd[1512]: 2025-06-20 19:44:04.281 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-6rptn" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:04.342542 containerd[1512]: 2025-06-20 19:44:04.295 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-6rptn" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0", GenerateName:"calico-apiserver-8f5b5557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 43, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f5b5557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738", Pod:"calico-apiserver-8f5b5557d-6rptn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5e997cd367", MAC:"de:8e:e2:57:4e:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:04.342542 containerd[1512]: 2025-06-20 19:44:04.318 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Namespace="calico-apiserver" Pod="calico-apiserver-8f5b5557d-6rptn" 
WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:04.351974 containerd[1512]: time="2025-06-20T19:44:04.351869444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z95lx,Uid:e57eaa74-3032-4619-bb92-dee9a0197bd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e\"" Jun 20 19:44:04.373909 containerd[1512]: time="2025-06-20T19:44:04.373852482Z" level=info msg="CreateContainer within sandbox \"ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:44:04.389410 systemd[1]: Started cri-containerd-87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b.scope - libcontainer container 87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b. Jun 20 19:44:04.438388 systemd-networkd[1427]: cali933cfda90d7: Gained IPv6LL Jun 20 19:44:04.440318 containerd[1512]: time="2025-06-20T19:44:04.439150336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f5b5557d-j7slj,Uid:807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\"" Jun 20 19:44:04.458989 containerd[1512]: time="2025-06-20T19:44:04.458937589Z" level=info msg="Container 421c2dd070442742da4eed746263ede012fb135dec049df6cd4fc46aee1f75fb: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:04.468488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424482320.mount: Deactivated successfully. 
Jun 20 19:44:04.493272 containerd[1512]: time="2025-06-20T19:44:04.493054618Z" level=info msg="CreateContainer within sandbox \"ec4db73a1e2e5bb0b98c9cafc1940ff23529d62467e1cb0bdee5e779fc967b0e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"421c2dd070442742da4eed746263ede012fb135dec049df6cd4fc46aee1f75fb\"" Jun 20 19:44:04.493972 containerd[1512]: time="2025-06-20T19:44:04.493937213Z" level=info msg="connecting to shim 50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" address="unix:///run/containerd/s/d2e3e52c4f10d534bac0a51a81bed69ea3f471514bacc73d882dee740524127c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:04.501085 containerd[1512]: time="2025-06-20T19:44:04.500788329Z" level=info msg="StartContainer for \"421c2dd070442742da4eed746263ede012fb135dec049df6cd4fc46aee1f75fb\"" Jun 20 19:44:04.505217 containerd[1512]: time="2025-06-20T19:44:04.505154435Z" level=info msg="connecting to shim 421c2dd070442742da4eed746263ede012fb135dec049df6cd4fc46aee1f75fb" address="unix:///run/containerd/s/ce4575f4831d9cd1838ba2c4126ea5114b65a78d0618f5a4ccdf1ef00d5bde41" protocol=ttrpc version=3 Jun 20 19:44:04.536489 systemd[1]: Started cri-containerd-50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738.scope - libcontainer container 50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738. Jun 20 19:44:04.597505 systemd[1]: Started cri-containerd-421c2dd070442742da4eed746263ede012fb135dec049df6cd4fc46aee1f75fb.scope - libcontainer container 421c2dd070442742da4eed746263ede012fb135dec049df6cd4fc46aee1f75fb. 
Jun 20 19:44:04.612449 containerd[1512]: time="2025-06-20T19:44:04.612365287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-nq7xc,Uid:be112bd7-6812-4441-9f4a-bbae19965424,Namespace:calico-system,Attempt:0,} returns sandbox id \"87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b\"" Jun 20 19:44:04.687538 containerd[1512]: time="2025-06-20T19:44:04.687486812Z" level=info msg="StartContainer for \"421c2dd070442742da4eed746263ede012fb135dec049df6cd4fc46aee1f75fb\" returns successfully" Jun 20 19:44:04.781611 containerd[1512]: time="2025-06-20T19:44:04.781549410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f5b5557d-6rptn,Uid:47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\"" Jun 20 19:44:04.859148 systemd-networkd[1427]: vxlan.calico: Link UP Jun 20 19:44:04.859157 systemd-networkd[1427]: vxlan.calico: Gained carrier Jun 20 19:44:04.957442 kubelet[2775]: I0620 19:44:04.957313 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z95lx" podStartSLOduration=50.957246264 podStartE2EDuration="50.957246264s" podCreationTimestamp="2025-06-20 19:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:44:04.954450623 +0000 UTC m=+56.670076201" watchObservedRunningTime="2025-06-20 19:44:04.957246264 +0000 UTC m=+56.672871842" Jun 20 19:44:05.206919 systemd-networkd[1427]: calic4e766b88d4: Gained IPv6LL Jun 20 19:44:05.846605 systemd-networkd[1427]: calif5e997cd367: Gained IPv6LL Jun 20 19:44:05.974507 systemd-networkd[1427]: cali85c828e663b: Gained IPv6LL Jun 20 19:44:06.167321 systemd-networkd[1427]: cali6d1978fc439: Gained IPv6LL Jun 20 19:44:06.806402 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Jun 20 19:44:08.371598 
containerd[1512]: time="2025-06-20T19:44:08.371467619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:08.373202 containerd[1512]: time="2025-06-20T19:44:08.372899155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:44:08.376160 containerd[1512]: time="2025-06-20T19:44:08.376099170Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:08.378916 containerd[1512]: time="2025-06-20T19:44:08.378862444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:08.380866 containerd[1512]: time="2025-06-20T19:44:08.380476012Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 6.62327849s" Jun 20 19:44:08.380866 containerd[1512]: time="2025-06-20T19:44:08.380561493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:44:08.382471 containerd[1512]: time="2025-06-20T19:44:08.382447812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:44:08.404134 containerd[1512]: time="2025-06-20T19:44:08.404098257Z" level=info msg="CreateContainer within sandbox 
\"1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:44:08.427822 containerd[1512]: time="2025-06-20T19:44:08.427775314Z" level=info msg="Container 130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:08.497671 containerd[1512]: time="2025-06-20T19:44:08.497558846Z" level=info msg="CreateContainer within sandbox \"1d4e906a72c00de8cb9caab02acb14fa5336ffa91fac0484fc2d6c117adfe075\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\"" Jun 20 19:44:08.501120 containerd[1512]: time="2025-06-20T19:44:08.500365221Z" level=info msg="StartContainer for \"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\"" Jun 20 19:44:08.504254 containerd[1512]: time="2025-06-20T19:44:08.503938397Z" level=info msg="connecting to shim 130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352" address="unix:///run/containerd/s/93e955824273fd2e47e53167a173728d20d59a62a8b2bd93b1102bec6e37002b" protocol=ttrpc version=3 Jun 20 19:44:08.549443 systemd[1]: Started cri-containerd-130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352.scope - libcontainer container 130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352. 
Jun 20 19:44:08.627972 containerd[1512]: time="2025-06-20T19:44:08.627842309Z" level=info msg="StartContainer for \"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\" returns successfully" Jun 20 19:44:09.082038 containerd[1512]: time="2025-06-20T19:44:09.081945874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\" id:\"4795ba6ffd82607d2cad16973663af0d85518ff410130058cb3968a8997ce813\" pid:4965 exited_at:{seconds:1750448649 nanos:81310661}" Jun 20 19:44:09.103791 kubelet[2775]: I0620 19:44:09.103261 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bbd87cbf5-cz57f" podStartSLOduration=32.477670059 podStartE2EDuration="39.103107007s" podCreationTimestamp="2025-06-20 19:43:30 +0000 UTC" firstStartedPulling="2025-06-20 19:44:01.756548727 +0000 UTC m=+53.472174295" lastFinishedPulling="2025-06-20 19:44:08.381985675 +0000 UTC m=+60.097611243" observedRunningTime="2025-06-20 19:44:09.056549291 +0000 UTC m=+60.772174880" watchObservedRunningTime="2025-06-20 19:44:09.103107007 +0000 UTC m=+60.818732575" Jun 20 19:44:10.716236 containerd[1512]: time="2025-06-20T19:44:10.715858443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:10.718059 containerd[1512]: time="2025-06-20T19:44:10.717943908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 20 19:44:10.719869 containerd[1512]: time="2025-06-20T19:44:10.719535485Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:10.722744 containerd[1512]: time="2025-06-20T19:44:10.722705596Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:10.723494 containerd[1512]: time="2025-06-20T19:44:10.723457317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 2.340774263s" Jun 20 19:44:10.723612 containerd[1512]: time="2025-06-20T19:44:10.723584876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 20 19:44:10.726171 containerd[1512]: time="2025-06-20T19:44:10.726143619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:44:10.728491 containerd[1512]: time="2025-06-20T19:44:10.728444649Z" level=info msg="CreateContainer within sandbox \"59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:44:10.755770 containerd[1512]: time="2025-06-20T19:44:10.754277667Z" level=info msg="Container d545d9dd22c0ba4665c08e88810cc0f22ae9114c676496d32628595bff5bed37: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:10.783831 containerd[1512]: time="2025-06-20T19:44:10.783738215Z" level=info msg="CreateContainer within sandbox \"59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d545d9dd22c0ba4665c08e88810cc0f22ae9114c676496d32628595bff5bed37\"" Jun 20 19:44:10.786278 containerd[1512]: time="2025-06-20T19:44:10.784925384Z" level=info msg="StartContainer for \"d545d9dd22c0ba4665c08e88810cc0f22ae9114c676496d32628595bff5bed37\"" Jun 
20 19:44:10.788718 containerd[1512]: time="2025-06-20T19:44:10.788613185Z" level=info msg="connecting to shim d545d9dd22c0ba4665c08e88810cc0f22ae9114c676496d32628595bff5bed37" address="unix:///run/containerd/s/5d67fe74056534e9852f7c0e0c87950189e032c0d03b3ad48c79bf4c3115c36b" protocol=ttrpc version=3 Jun 20 19:44:10.826357 systemd[1]: Started cri-containerd-d545d9dd22c0ba4665c08e88810cc0f22ae9114c676496d32628595bff5bed37.scope - libcontainer container d545d9dd22c0ba4665c08e88810cc0f22ae9114c676496d32628595bff5bed37. Jun 20 19:44:10.900326 containerd[1512]: time="2025-06-20T19:44:10.899675743Z" level=info msg="StartContainer for \"d545d9dd22c0ba4665c08e88810cc0f22ae9114c676496d32628595bff5bed37\" returns successfully" Jun 20 19:44:15.251161 containerd[1512]: time="2025-06-20T19:44:15.250330122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:15.251736 containerd[1512]: time="2025-06-20T19:44:15.251705425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:44:15.252981 containerd[1512]: time="2025-06-20T19:44:15.252916631Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:15.255807 containerd[1512]: time="2025-06-20T19:44:15.255760302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:15.256729 containerd[1512]: time="2025-06-20T19:44:15.256701821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 4.530525861s" Jun 20 19:44:15.256860 containerd[1512]: time="2025-06-20T19:44:15.256840350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:44:15.258532 containerd[1512]: time="2025-06-20T19:44:15.258497544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:44:15.260927 containerd[1512]: time="2025-06-20T19:44:15.259863900Z" level=info msg="CreateContainer within sandbox \"b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:44:15.276100 containerd[1512]: time="2025-06-20T19:44:15.276061336Z" level=info msg="Container 8fd3d713011630c27f44205caf5a9465531bd2e236971d8d07c8ea985d5fa5d0: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:15.286382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675997846.mount: Deactivated successfully. 
Jun 20 19:44:15.291196 containerd[1512]: time="2025-06-20T19:44:15.291092451Z" level=info msg="CreateContainer within sandbox \"b8508a8c95f85918df7f703505b770619c2a8988b1b1e046f12fff1f88517fc5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8fd3d713011630c27f44205caf5a9465531bd2e236971d8d07c8ea985d5fa5d0\"" Jun 20 19:44:15.292016 containerd[1512]: time="2025-06-20T19:44:15.291994376Z" level=info msg="StartContainer for \"8fd3d713011630c27f44205caf5a9465531bd2e236971d8d07c8ea985d5fa5d0\"" Jun 20 19:44:15.295100 containerd[1512]: time="2025-06-20T19:44:15.295017755Z" level=info msg="connecting to shim 8fd3d713011630c27f44205caf5a9465531bd2e236971d8d07c8ea985d5fa5d0" address="unix:///run/containerd/s/da74028ec931b906befcaf17064037ef38ae295892c51ef2d7ed2d363376c387" protocol=ttrpc version=3 Jun 20 19:44:15.328372 systemd[1]: Started cri-containerd-8fd3d713011630c27f44205caf5a9465531bd2e236971d8d07c8ea985d5fa5d0.scope - libcontainer container 8fd3d713011630c27f44205caf5a9465531bd2e236971d8d07c8ea985d5fa5d0. 
Jun 20 19:44:15.402136 containerd[1512]: time="2025-06-20T19:44:15.402098378Z" level=info msg="StartContainer for \"8fd3d713011630c27f44205caf5a9465531bd2e236971d8d07c8ea985d5fa5d0\" returns successfully" Jun 20 19:44:16.035937 kubelet[2775]: I0620 19:44:16.034047 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bcf54c67d-bm2gt" podStartSLOduration=36.053209962 podStartE2EDuration="49.033970825s" podCreationTimestamp="2025-06-20 19:43:27 +0000 UTC" firstStartedPulling="2025-06-20 19:44:02.277308406 +0000 UTC m=+53.992933974" lastFinishedPulling="2025-06-20 19:44:15.258069259 +0000 UTC m=+66.973694837" observedRunningTime="2025-06-20 19:44:16.031859989 +0000 UTC m=+67.747485567" watchObservedRunningTime="2025-06-20 19:44:16.033970825 +0000 UTC m=+67.749596393" Jun 20 19:44:17.012080 kubelet[2775]: I0620 19:44:17.012032 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:44:17.425019 containerd[1512]: time="2025-06-20T19:44:17.424042560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:17.426296 containerd[1512]: time="2025-06-20T19:44:17.426247212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:44:17.428393 containerd[1512]: time="2025-06-20T19:44:17.428348701Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:17.432248 containerd[1512]: time="2025-06-20T19:44:17.432219894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:17.433471 containerd[1512]: 
time="2025-06-20T19:44:17.433403518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 2.174873684s" Jun 20 19:44:17.433471 containerd[1512]: time="2025-06-20T19:44:17.433469522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:44:17.436367 containerd[1512]: time="2025-06-20T19:44:17.436320910Z" level=info msg="CreateContainer within sandbox \"f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:44:17.438028 containerd[1512]: time="2025-06-20T19:44:17.437980278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:44:17.452065 containerd[1512]: time="2025-06-20T19:44:17.451450748Z" level=info msg="Container b5244c3bf2af953274789c59135b848aa0feba66fc870378522d6d11396dc047: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:17.460275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854593220.mount: Deactivated successfully. 
Jun 20 19:44:17.466761 containerd[1512]: time="2025-06-20T19:44:17.466723935Z" level=info msg="CreateContainer within sandbox \"f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b5244c3bf2af953274789c59135b848aa0feba66fc870378522d6d11396dc047\"" Jun 20 19:44:17.468133 containerd[1512]: time="2025-06-20T19:44:17.468046219Z" level=info msg="StartContainer for \"b5244c3bf2af953274789c59135b848aa0feba66fc870378522d6d11396dc047\"" Jun 20 19:44:17.469735 containerd[1512]: time="2025-06-20T19:44:17.469700318Z" level=info msg="connecting to shim b5244c3bf2af953274789c59135b848aa0feba66fc870378522d6d11396dc047" address="unix:///run/containerd/s/f7341fd6965377b57e80584f37c0f71f45bbf80ca8c77127baa11cc8933a4e4d" protocol=ttrpc version=3 Jun 20 19:44:17.508427 systemd[1]: Started cri-containerd-b5244c3bf2af953274789c59135b848aa0feba66fc870378522d6d11396dc047.scope - libcontainer container b5244c3bf2af953274789c59135b848aa0feba66fc870378522d6d11396dc047. 
Jun 20 19:44:17.583965 containerd[1512]: time="2025-06-20T19:44:17.583864164Z" level=info msg="StartContainer for \"b5244c3bf2af953274789c59135b848aa0feba66fc870378522d6d11396dc047\" returns successfully" Jun 20 19:44:17.917304 containerd[1512]: time="2025-06-20T19:44:17.916646691Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:17.918523 containerd[1512]: time="2025-06-20T19:44:17.918403622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:44:17.924846 containerd[1512]: time="2025-06-20T19:44:17.924748644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 486.708323ms" Jun 20 19:44:17.924846 containerd[1512]: time="2025-06-20T19:44:17.924830958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:44:17.931042 containerd[1512]: time="2025-06-20T19:44:17.928678146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:44:17.931806 containerd[1512]: time="2025-06-20T19:44:17.931596771Z" level=info msg="CreateContainer within sandbox \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:44:17.954235 containerd[1512]: time="2025-06-20T19:44:17.950377458Z" level=info msg="Container 9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:17.982604 containerd[1512]: 
time="2025-06-20T19:44:17.982475549Z" level=info msg="CreateContainer within sandbox \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\"" Jun 20 19:44:17.986339 containerd[1512]: time="2025-06-20T19:44:17.984543755Z" level=info msg="StartContainer for \"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\"" Jun 20 19:44:17.990951 containerd[1512]: time="2025-06-20T19:44:17.990804418Z" level=info msg="connecting to shim 9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c" address="unix:///run/containerd/s/a78eee3c50c1f581041f8a489742d89d5fc9bb1783a4c1744f1a555decc16269" protocol=ttrpc version=3 Jun 20 19:44:18.054340 systemd[1]: Started cri-containerd-9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c.scope - libcontainer container 9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c. 
Jun 20 19:44:18.118426 containerd[1512]: time="2025-06-20T19:44:18.118313858Z" level=info msg="StartContainer for \"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" returns successfully" Jun 20 19:44:19.063992 kubelet[2775]: I0620 19:44:19.063774 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8f5b5557d-j7slj" podStartSLOduration=39.580546568 podStartE2EDuration="53.063744101s" podCreationTimestamp="2025-06-20 19:43:26 +0000 UTC" firstStartedPulling="2025-06-20 19:44:04.443357635 +0000 UTC m=+56.158983213" lastFinishedPulling="2025-06-20 19:44:17.926555128 +0000 UTC m=+69.642180746" observedRunningTime="2025-06-20 19:44:19.057219228 +0000 UTC m=+70.772844806" watchObservedRunningTime="2025-06-20 19:44:19.063744101 +0000 UTC m=+70.779369679" Jun 20 19:44:21.178934 containerd[1512]: time="2025-06-20T19:44:21.178844257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\" id:\"b1d70ca3b0e83dce2eabb5ad54f8c883164b32ccea16dc94d0d0525c8c12b010\" pid:5161 exited_at:{seconds:1750448661 nanos:178091842}" Jun 20 19:44:24.651853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237191439.mount: Deactivated successfully. 
Jun 20 19:44:25.771937 containerd[1512]: time="2025-06-20T19:44:25.770935055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:25.774165 containerd[1512]: time="2025-06-20T19:44:25.773898348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 20 19:44:25.776251 containerd[1512]: time="2025-06-20T19:44:25.776020539Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:25.782733 containerd[1512]: time="2025-06-20T19:44:25.782601926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:25.789568 containerd[1512]: time="2025-06-20T19:44:25.788328517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 7.856924207s" Jun 20 19:44:25.789568 containerd[1512]: time="2025-06-20T19:44:25.789305754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 20 19:44:25.799231 containerd[1512]: time="2025-06-20T19:44:25.798815168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:44:25.808534 containerd[1512]: time="2025-06-20T19:44:25.808313000Z" level=info msg="CreateContainer within sandbox 
\"87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:44:25.829679 containerd[1512]: time="2025-06-20T19:44:25.829639388Z" level=info msg="Container 149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:25.840789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333977636.mount: Deactivated successfully. Jun 20 19:44:25.854132 containerd[1512]: time="2025-06-20T19:44:25.854062850Z" level=info msg="CreateContainer within sandbox \"87c93c2ae6241879d545558c34f329023f664c9c0caf0ff344d52097dce0c01b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\"" Jun 20 19:44:25.855590 containerd[1512]: time="2025-06-20T19:44:25.855529498Z" level=info msg="StartContainer for \"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\"" Jun 20 19:44:25.859581 containerd[1512]: time="2025-06-20T19:44:25.859549930Z" level=info msg="connecting to shim 149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2" address="unix:///run/containerd/s/85f00f0ed390b3505dfc2100f5b1b35c72e36167abdc418aca2317651ca3aa81" protocol=ttrpc version=3 Jun 20 19:44:25.914378 systemd[1]: Started cri-containerd-149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2.scope - libcontainer container 149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2. 
Jun 20 19:44:26.031728 containerd[1512]: time="2025-06-20T19:44:26.031594001Z" level=info msg="StartContainer for \"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\" returns successfully" Jun 20 19:44:26.110816 kubelet[2775]: I0620 19:44:26.110716 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-nq7xc" podStartSLOduration=35.932706554 podStartE2EDuration="57.110651936s" podCreationTimestamp="2025-06-20 19:43:29 +0000 UTC" firstStartedPulling="2025-06-20 19:44:04.616879141 +0000 UTC m=+56.332504719" lastFinishedPulling="2025-06-20 19:44:25.794824473 +0000 UTC m=+77.510450101" observedRunningTime="2025-06-20 19:44:26.105562053 +0000 UTC m=+77.821187621" watchObservedRunningTime="2025-06-20 19:44:26.110651936 +0000 UTC m=+77.826277524" Jun 20 19:44:26.327643 containerd[1512]: time="2025-06-20T19:44:26.327403091Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:44:26.330303 containerd[1512]: time="2025-06-20T19:44:26.329791211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:44:26.339731 containerd[1512]: time="2025-06-20T19:44:26.339650445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 539.785463ms" Jun 20 19:44:26.340060 containerd[1512]: time="2025-06-20T19:44:26.340016414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:44:26.345563 containerd[1512]: 
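The kubelet `pod_startup_latency_tracker` entries above report `podStartE2EDuration` as the gap between `podCreationTimestamp` and `watchObservedRunningTime`. A minimal sketch reproducing that arithmetic from the goldmane entry's own timestamps — `parse_k8s_time` is a hypothetical helper written for this illustration, not part of kubelet, and it truncates Go's nanosecond timestamps to the microseconds Python's `datetime` can hold:

```python
from datetime import datetime

def parse_k8s_time(s: str) -> datetime:
    # kubelet prints Go-style times, e.g. "2025-06-20 19:44:26.110651936 +0000 UTC":
    # drop the trailing zone name and truncate nanoseconds to microseconds.
    s = s.removesuffix(" UTC")
    date, clock, zone = s.split(" ")
    if "." in clock:
        hms, frac = clock.split(".")
        clock = f"{hms}.{frac[:6]}"
    else:
        clock = f"{clock}.000000"
    return datetime.strptime(f"{date} {clock} {zone}", "%Y-%m-%d %H:%M:%S.%f %z")

# Values taken verbatim from the goldmane-5bd85449d4-nq7xc entry in the log.
created = parse_k8s_time("2025-06-20 19:43:29 +0000 UTC")
observed = parse_k8s_time("2025-06-20 19:44:26.110651936 +0000 UTC")
dur = (observed - created).total_seconds()
# dur == 57.110651, i.e. the logged podStartE2EDuration of 57.110651936s
# truncated to microsecond precision.
```

The same arithmetic applies to the pull-latency fields: `podStartSLOduration` excludes image pull time (`lastFinishedPulling` minus `firstStartedPulling`), which is why it is smaller than `podStartE2EDuration` for pods that had to pull images.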
time="2025-06-20T19:44:26.345335519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 19:44:26.348761 containerd[1512]: time="2025-06-20T19:44:26.348616559Z" level=info msg="CreateContainer within sandbox \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:44:26.374585 containerd[1512]: time="2025-06-20T19:44:26.372716147Z" level=info msg="Container fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:26.423458 containerd[1512]: time="2025-06-20T19:44:26.423245826Z" level=info msg="CreateContainer within sandbox \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\"" Jun 20 19:44:26.427166 containerd[1512]: time="2025-06-20T19:44:26.427062023Z" level=info msg="StartContainer for \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\"" Jun 20 19:44:26.431454 containerd[1512]: time="2025-06-20T19:44:26.431363483Z" level=info msg="connecting to shim fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38" address="unix:///run/containerd/s/d2e3e52c4f10d534bac0a51a81bed69ea3f471514bacc73d882dee740524127c" protocol=ttrpc version=3 Jun 20 19:44:26.470715 systemd[1]: Started cri-containerd-fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38.scope - libcontainer container fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38. 
Jun 20 19:44:26.545975 containerd[1512]: time="2025-06-20T19:44:26.545649568Z" level=info msg="StartContainer for \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" returns successfully" Jun 20 19:44:26.560114 kubelet[2775]: I0620 19:44:26.559820 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:44:26.725753 systemd[1]: Created slice kubepods-besteffort-pod9a765b41_b061_440e_8b8b_9f299c17544d.slice - libcontainer container kubepods-besteffort-pod9a765b41_b061_440e_8b8b_9f299c17544d.slice. Jun 20 19:44:26.813781 kubelet[2775]: I0620 19:44:26.813715 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jdb\" (UniqueName: \"kubernetes.io/projected/9a765b41-b061-440e-8b8b-9f299c17544d-kube-api-access-t6jdb\") pod \"calico-apiserver-7bcf54c67d-7kjpw\" (UID: \"9a765b41-b061-440e-8b8b-9f299c17544d\") " pod="calico-apiserver/calico-apiserver-7bcf54c67d-7kjpw" Jun 20 19:44:26.813781 kubelet[2775]: I0620 19:44:26.813789 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a765b41-b061-440e-8b8b-9f299c17544d-calico-apiserver-certs\") pod \"calico-apiserver-7bcf54c67d-7kjpw\" (UID: \"9a765b41-b061-440e-8b8b-9f299c17544d\") " pod="calico-apiserver/calico-apiserver-7bcf54c67d-7kjpw" Jun 20 19:44:27.032852 containerd[1512]: time="2025-06-20T19:44:27.032795376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf54c67d-7kjpw,Uid:9a765b41-b061-440e-8b8b-9f299c17544d,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:44:27.129095 kubelet[2775]: I0620 19:44:27.128827 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8f5b5557d-6rptn" podStartSLOduration=39.570989291 podStartE2EDuration="1m1.128620636s" podCreationTimestamp="2025-06-20 19:43:26 +0000 UTC" 
firstStartedPulling="2025-06-20 19:44:04.785355426 +0000 UTC m=+56.500981004" lastFinishedPulling="2025-06-20 19:44:26.34298673 +0000 UTC m=+78.058612349" observedRunningTime="2025-06-20 19:44:27.127858252 +0000 UTC m=+78.843483820" watchObservedRunningTime="2025-06-20 19:44:27.128620636 +0000 UTC m=+78.844246214" Jun 20 19:44:27.293907 systemd-networkd[1427]: cali20805c8bc7c: Link UP Jun 20 19:44:27.294683 systemd-networkd[1427]: cali20805c8bc7c: Gained carrier Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.119 [INFO][5271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0 calico-apiserver-7bcf54c67d- calico-apiserver 9a765b41-b061-440e-8b8b-9f299c17544d 1098 0 2025-06-20 19:44:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bcf54c67d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-0-8-afb8bdccbb.novalocal calico-apiserver-7bcf54c67d-7kjpw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali20805c8bc7c [] [] }} ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-7kjpw" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.119 [INFO][5271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-7kjpw" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 
19:44:27.211 [INFO][5301] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" HandleID="k8s-pod-network.99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.212 [INFO][5301] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" HandleID="k8s-pod-network.99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000389cb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-0-8-afb8bdccbb.novalocal", "pod":"calico-apiserver-7bcf54c67d-7kjpw", "timestamp":"2025-06-20 19:44:27.21187557 +0000 UTC"}, Hostname:"ci-4344-1-0-8-afb8bdccbb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.212 [INFO][5301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.212 [INFO][5301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.212 [INFO][5301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-0-8-afb8bdccbb.novalocal' Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.229 [INFO][5301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.240 [INFO][5301] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.250 [INFO][5301] ipam/ipam.go 511: Trying affinity for 192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.254 [INFO][5301] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.259 [INFO][5301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.259 [INFO][5301] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.262 [INFO][5301] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328 Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.268 [INFO][5301] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:27.329088 
containerd[1512]: 2025-06-20 19:44:27.281 [INFO][5301] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.25.74/26] block=192.168.25.64/26 handle="k8s-pod-network.99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.281 [INFO][5301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.74/26] handle="k8s-pod-network.99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" host="ci-4344-1-0-8-afb8bdccbb.novalocal" Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.281 [INFO][5301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:44:27.329088 containerd[1512]: 2025-06-20 19:44:27.281 [INFO][5301] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.25.74/26] IPv6=[] ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" HandleID="k8s-pod-network.99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" Jun 20 19:44:27.329936 containerd[1512]: 2025-06-20 19:44:27.284 [INFO][5271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-7kjpw" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0", GenerateName:"calico-apiserver-7bcf54c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a765b41-b061-440e-8b8b-9f299c17544d", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 44, 26, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcf54c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"", Pod:"calico-apiserver-7bcf54c67d-7kjpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20805c8bc7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:27.329936 containerd[1512]: 2025-06-20 19:44:27.284 [INFO][5271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.74/32] ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-7kjpw" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" Jun 20 19:44:27.329936 containerd[1512]: 2025-06-20 19:44:27.284 [INFO][5271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20805c8bc7c ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-7kjpw" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" Jun 20 19:44:27.329936 containerd[1512]: 2025-06-20 19:44:27.295 [INFO][5271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-7kjpw" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" Jun 20 19:44:27.329936 containerd[1512]: 2025-06-20 19:44:27.296 [INFO][5271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-7kjpw" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0", GenerateName:"calico-apiserver-7bcf54c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a765b41-b061-440e-8b8b-9f299c17544d", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcf54c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-0-8-afb8bdccbb.novalocal", ContainerID:"99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328", Pod:"calico-apiserver-7bcf54c67d-7kjpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.74/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20805c8bc7c", MAC:"4e:9b:31:45:07:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:44:27.329936 containerd[1512]: 2025-06-20 19:44:27.317 [INFO][5271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf54c67d-7kjpw" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--7bcf54c67d--7kjpw-eth0" Jun 20 19:44:27.379223 containerd[1512]: time="2025-06-20T19:44:27.379089774Z" level=info msg="connecting to shim 99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328" address="unix:///run/containerd/s/74bfbae38822e2e4f4d02d646e86da1a87745336f7efe32986ae2a36686129ef" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:44:27.420756 systemd[1]: Started cri-containerd-99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328.scope - libcontainer container 99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328. 
Jun 20 19:44:27.539230 containerd[1512]: time="2025-06-20T19:44:27.538859324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\" id:\"989fd07591d5ad227239ad1d10b62aa6411813c03222c589768acad35bf2323e\" pid:5294 exit_status:1 exited_at:{seconds:1750448667 nanos:538140993}" Jun 20 19:44:27.549624 containerd[1512]: time="2025-06-20T19:44:27.549452339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf54c67d-7kjpw,Uid:9a765b41-b061-440e-8b8b-9f299c17544d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328\"" Jun 20 19:44:27.557253 containerd[1512]: time="2025-06-20T19:44:27.556277057Z" level=info msg="CreateContainer within sandbox \"99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:44:27.570075 containerd[1512]: time="2025-06-20T19:44:27.570039214Z" level=info msg="Container 36e5de41c198f8ce2dec255c9663957568bad83aeec109c48debcdcfc05e0964: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:44:27.582864 containerd[1512]: time="2025-06-20T19:44:27.582826667Z" level=info msg="CreateContainer within sandbox \"99a4f054e90f7290804c3e165ed830ea57f2fbc1dcc8af8f3d7dc734ddc59328\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"36e5de41c198f8ce2dec255c9663957568bad83aeec109c48debcdcfc05e0964\"" Jun 20 19:44:27.583722 containerd[1512]: time="2025-06-20T19:44:27.583684511Z" level=info msg="StartContainer for \"36e5de41c198f8ce2dec255c9663957568bad83aeec109c48debcdcfc05e0964\"" Jun 20 19:44:27.585712 containerd[1512]: time="2025-06-20T19:44:27.585685214Z" level=info msg="connecting to shim 36e5de41c198f8ce2dec255c9663957568bad83aeec109c48debcdcfc05e0964" address="unix:///run/containerd/s/74bfbae38822e2e4f4d02d646e86da1a87745336f7efe32986ae2a36686129ef" protocol=ttrpc 
version=3 Jun 20 19:44:27.616690 systemd[1]: Started cri-containerd-36e5de41c198f8ce2dec255c9663957568bad83aeec109c48debcdcfc05e0964.scope - libcontainer container 36e5de41c198f8ce2dec255c9663957568bad83aeec109c48debcdcfc05e0964. Jun 20 19:44:27.769203 containerd[1512]: time="2025-06-20T19:44:27.767587807Z" level=info msg="StartContainer for \"36e5de41c198f8ce2dec255c9663957568bad83aeec109c48debcdcfc05e0964\" returns successfully" Jun 20 19:44:28.099690 kubelet[2775]: I0620 19:44:28.099650 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:44:28.103676 containerd[1512]: time="2025-06-20T19:44:28.102765241Z" level=info msg="StopContainer for \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" with timeout 30 (s)" Jun 20 19:44:28.105134 containerd[1512]: time="2025-06-20T19:44:28.104846495Z" level=info msg="Stop container \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" with signal terminated" Jun 20 19:44:28.144514 systemd[1]: cri-containerd-fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38.scope: Deactivated successfully. 
Jun 20 19:44:28.149920 containerd[1512]: time="2025-06-20T19:44:28.149789903Z" level=info msg="received exit event container_id:\"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" id:\"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" pid:5241 exit_status:1 exited_at:{seconds:1750448668 nanos:149269193}" Jun 20 19:44:28.151683 containerd[1512]: time="2025-06-20T19:44:28.151481194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" id:\"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" pid:5241 exit_status:1 exited_at:{seconds:1750448668 nanos:149269193}" Jun 20 19:44:28.194848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38-rootfs.mount: Deactivated successfully. Jun 20 19:44:28.339673 containerd[1512]: time="2025-06-20T19:44:28.339463786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\" id:\"25c4b92b6caef824ceb7ff53e520d5d2151321bf19775bc81b1335f7c5dd7da2\" pid:5423 exit_status:1 exited_at:{seconds:1750448668 nanos:337625038}" Jun 20 19:44:28.951771 systemd-networkd[1427]: cali20805c8bc7c: Gained IPv6LL Jun 20 19:44:29.264050 containerd[1512]: time="2025-06-20T19:44:29.262157378Z" level=info msg="StopContainer for \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" returns successfully" Jun 20 19:44:29.267246 containerd[1512]: time="2025-06-20T19:44:29.267143358Z" level=info msg="StopPodSandbox for \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\"" Jun 20 19:44:29.270438 containerd[1512]: time="2025-06-20T19:44:29.270404453Z" level=info msg="Container to stop \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:44:29.303641 
systemd[1]: cri-containerd-50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738.scope: Deactivated successfully. Jun 20 19:44:29.306469 containerd[1512]: time="2025-06-20T19:44:29.306379455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" id:\"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" pid:4771 exit_status:137 exited_at:{seconds:1750448669 nanos:305884964}" Jun 20 19:44:29.383854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738-rootfs.mount: Deactivated successfully. Jun 20 19:44:29.386774 containerd[1512]: time="2025-06-20T19:44:29.386636458Z" level=info msg="shim disconnected" id=50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738 namespace=k8s.io Jun 20 19:44:29.386774 containerd[1512]: time="2025-06-20T19:44:29.386685009Z" level=warning msg="cleaning up after shim disconnected" id=50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738 namespace=k8s.io Jun 20 19:44:29.386774 containerd[1512]: time="2025-06-20T19:44:29.386696390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:44:29.391187 containerd[1512]: time="2025-06-20T19:44:29.391129911Z" level=info msg="received exit event sandbox_id:\"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" exit_status:137 exited_at:{seconds:1750448669 nanos:305884964}" Jun 20 19:44:29.396367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738-shm.mount: Deactivated successfully. 
Jun 20 19:44:29.527588 kubelet[2775]: I0620 19:44:29.527145 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bcf54c67d-7kjpw" podStartSLOduration=3.526957402 podStartE2EDuration="3.526957402s" podCreationTimestamp="2025-06-20 19:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:44:28.128460638 +0000 UTC m=+79.844086206" watchObservedRunningTime="2025-06-20 19:44:29.526957402 +0000 UTC m=+81.242582980" Jun 20 19:44:29.529501 systemd-networkd[1427]: calif5e997cd367: Link DOWN Jun 20 19:44:29.529506 systemd-networkd[1427]: calif5e997cd367: Lost carrier Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.523 [INFO][5503] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.523 [INFO][5503] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" iface="eth0" netns="/var/run/netns/cni-88b2cfd4-7edd-0753-65cc-c606d8c31638" Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.525 [INFO][5503] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" iface="eth0" netns="/var/run/netns/cni-88b2cfd4-7edd-0753-65cc-c606d8c31638" Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.543 [INFO][5503] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" after=20.07099ms iface="eth0" netns="/var/run/netns/cni-88b2cfd4-7edd-0753-65cc-c606d8c31638" Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.543 [INFO][5503] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.543 [INFO][5503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.629 [INFO][5510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.629 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.629 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.711 [INFO][5510] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.711 [INFO][5510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.714 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:44:29.722328 containerd[1512]: 2025-06-20 19:44:29.720 [INFO][5503] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:44:29.725939 containerd[1512]: time="2025-06-20T19:44:29.725758349Z" level=info msg="TearDown network for sandbox \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" successfully" Jun 20 19:44:29.726043 containerd[1512]: time="2025-06-20T19:44:29.725909083Z" level=info msg="StopPodSandbox for \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" returns successfully" Jun 20 19:44:29.731556 systemd[1]: run-netns-cni\x2d88b2cfd4\x2d7edd\x2d0753\x2d65cc\x2dc606d8c31638.mount: Deactivated successfully. 
Jun 20 19:44:29.845480 kubelet[2775]: I0620 19:44:29.845348 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0-calico-apiserver-certs\") pod \"47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0\" (UID: \"47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0\") " Jun 20 19:44:29.845480 kubelet[2775]: I0620 19:44:29.845422 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp5lt\" (UniqueName: \"kubernetes.io/projected/47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0-kube-api-access-hp5lt\") pod \"47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0\" (UID: \"47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0\") " Jun 20 19:44:29.862462 kubelet[2775]: I0620 19:44:29.862397 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0-kube-api-access-hp5lt" (OuterVolumeSpecName: "kube-api-access-hp5lt") pod "47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0" (UID: "47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0"). InnerVolumeSpecName "kube-api-access-hp5lt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:44:29.864401 kubelet[2775]: I0620 19:44:29.864313 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0" (UID: "47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:44:29.866084 systemd[1]: var-lib-kubelet-pods-47185c6a\x2dc8ab\x2d4dd4\x2d92fc\x2dfb3871ec6bc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhp5lt.mount: Deactivated successfully. 
Jun 20 19:44:29.872822 systemd[1]: var-lib-kubelet-pods-47185c6a\x2dc8ab\x2d4dd4\x2d92fc\x2dfb3871ec6bc0-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jun 20 19:44:29.938573 containerd[1512]: time="2025-06-20T19:44:29.938527867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:44:29.941950 containerd[1512]: time="2025-06-20T19:44:29.941771610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633"
Jun 20 19:44:29.944358 containerd[1512]: time="2025-06-20T19:44:29.944097585Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:44:29.949700 kubelet[2775]: I0620 19:44:29.949554 2775 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0-calico-apiserver-certs\") on node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" DevicePath \"\""
Jun 20 19:44:29.949700 kubelet[2775]: I0620 19:44:29.949591 2775 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hp5lt\" (UniqueName: \"kubernetes.io/projected/47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0-kube-api-access-hp5lt\") on node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" DevicePath \"\""
Jun 20 19:44:29.953108 containerd[1512]: time="2025-06-20T19:44:29.953041723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:44:29.954778 containerd[1512]: time="2025-06-20T19:44:29.954737102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 3.608759737s"
Jun 20 19:44:29.954976 containerd[1512]: time="2025-06-20T19:44:29.954869801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\""
Jun 20 19:44:29.957498 containerd[1512]: time="2025-06-20T19:44:29.957287230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\""
Jun 20 19:44:29.959389 containerd[1512]: time="2025-06-20T19:44:29.959334762Z" level=info msg="CreateContainer within sandbox \"59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jun 20 19:44:29.978580 containerd[1512]: time="2025-06-20T19:44:29.978541717Z" level=info msg="Container 08bb47a7dce2497f1f275f0b403dbdbe7f501589dac4dfa21d9ab7c43bf2465c: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:44:29.997192 containerd[1512]: time="2025-06-20T19:44:29.996712623Z" level=info msg="CreateContainer within sandbox \"59151367d6e5b72b9cdcc13e0d8444ea173bf9663b1a476e85abdcab03fd370d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"08bb47a7dce2497f1f275f0b403dbdbe7f501589dac4dfa21d9ab7c43bf2465c\""
Jun 20 19:44:29.998119 containerd[1512]: time="2025-06-20T19:44:29.998098040Z" level=info msg="StartContainer for \"08bb47a7dce2497f1f275f0b403dbdbe7f501589dac4dfa21d9ab7c43bf2465c\""
Jun 20 19:44:30.000456 containerd[1512]: time="2025-06-20T19:44:30.000381947Z" level=info msg="connecting to shim 08bb47a7dce2497f1f275f0b403dbdbe7f501589dac4dfa21d9ab7c43bf2465c" address="unix:///run/containerd/s/5d67fe74056534e9852f7c0e0c87950189e032c0d03b3ad48c79bf4c3115c36b" protocol=ttrpc version=3
Jun 20 19:44:30.033415 systemd[1]: Started cri-containerd-08bb47a7dce2497f1f275f0b403dbdbe7f501589dac4dfa21d9ab7c43bf2465c.scope - libcontainer container 08bb47a7dce2497f1f275f0b403dbdbe7f501589dac4dfa21d9ab7c43bf2465c.
Jun 20 19:44:30.103548 containerd[1512]: time="2025-06-20T19:44:30.103347449Z" level=info msg="StartContainer for \"08bb47a7dce2497f1f275f0b403dbdbe7f501589dac4dfa21d9ab7c43bf2465c\" returns successfully"
Jun 20 19:44:30.107699 kubelet[2775]: I0620 19:44:30.107550 2775 scope.go:117] "RemoveContainer" containerID="fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38"
Jun 20 19:44:30.114693 containerd[1512]: time="2025-06-20T19:44:30.114034426Z" level=info msg="RemoveContainer for \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\""
Jun 20 19:44:30.116534 kubelet[2775]: I0620 19:44:30.116507 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 19:44:30.117570 systemd[1]: Removed slice kubepods-besteffort-pod47185c6a_c8ab_4dd4_92fc_fb3871ec6bc0.slice - libcontainer container kubepods-besteffort-pod47185c6a_c8ab_4dd4_92fc_fb3871ec6bc0.slice.
Jun 20 19:44:30.123923 containerd[1512]: time="2025-06-20T19:44:30.123871646Z" level=info msg="RemoveContainer for \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" returns successfully"
Jun 20 19:44:30.124492 kubelet[2775]: I0620 19:44:30.124474 2775 scope.go:117] "RemoveContainer" containerID="fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38"
Jun 20 19:44:30.124953 containerd[1512]: time="2025-06-20T19:44:30.124881957Z" level=error msg="ContainerStatus for \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\": not found"
Jun 20 19:44:30.125242 kubelet[2775]: E0620 19:44:30.125139 2775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\": not found" containerID="fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38"
Jun 20 19:44:30.125406 kubelet[2775]: I0620 19:44:30.125190 2775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38"} err="failed to get container status \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc937b7784dc2aaecefddef1d8dc79fe5cf5730ae9cba7eba0eeea46455fea38\": not found"
Jun 20 19:44:30.173776 kubelet[2775]: I0620 19:44:30.172940 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vb48f" podStartSLOduration=32.301846342 podStartE2EDuration="1m0.172918439s" podCreationTimestamp="2025-06-20 19:43:30 +0000 UTC" firstStartedPulling="2025-06-20 19:44:02.085524985 +0000 UTC m=+53.801150553" lastFinishedPulling="2025-06-20 19:44:29.956597082 +0000 UTC m=+81.672222650" observedRunningTime="2025-06-20 19:44:30.152351031 +0000 UTC m=+81.867976619" watchObservedRunningTime="2025-06-20 19:44:30.172918439 +0000 UTC m=+81.888544007"
Jun 20 19:44:30.416488 kubelet[2775]: I0620 19:44:30.416370 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0" path="/var/lib/kubelet/pods/47185c6a-c8ab-4dd4-92fc-fb3871ec6bc0/volumes"
Jun 20 19:44:30.588419 kubelet[2775]: I0620 19:44:30.588359 2775 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jun 20 19:44:30.589521 kubelet[2775]: I0620 19:44:30.589154 2775 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jun 20 19:44:31.053767 containerd[1512]: time="2025-06-20T19:44:31.053715562Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d\" id:\"74dd219506ffe5dd1eca44f3ddf650c6b7195ddbea8af7f3fee93f6975306a4d\" pid:5569 exited_at:{seconds:1750448671 nanos:53279391}"
Jun 20 19:44:33.065972 containerd[1512]: time="2025-06-20T19:44:33.065924030Z" level=info msg="StopContainer for \"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" with timeout 30 (s)"
Jun 20 19:44:33.079425 containerd[1512]: time="2025-06-20T19:44:33.079249329Z" level=info msg="Stop container \"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" with signal terminated"
Jun 20 19:44:33.141679 systemd[1]: cri-containerd-9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c.scope: Deactivated successfully.
Jun 20 19:44:33.145377 systemd[1]: cri-containerd-9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c.scope: Consumed 1.936s CPU time, 57.3M memory peak.
Jun 20 19:44:33.167327 containerd[1512]: time="2025-06-20T19:44:33.167212765Z" level=info msg="received exit event container_id:\"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" id:\"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" pid:5118 exit_status:1 exited_at:{seconds:1750448673 nanos:166698618}"
Jun 20 19:44:33.167795 containerd[1512]: time="2025-06-20T19:44:33.167749675Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" id:\"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" pid:5118 exit_status:1 exited_at:{seconds:1750448673 nanos:166698618}"
Jun 20 19:44:33.261165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c-rootfs.mount: Deactivated successfully.
Jun 20 19:44:33.609742 containerd[1512]: time="2025-06-20T19:44:33.609577881Z" level=info msg="StopContainer for \"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" returns successfully"
Jun 20 19:44:33.612192 containerd[1512]: time="2025-06-20T19:44:33.612105940Z" level=info msg="StopPodSandbox for \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\""
Jun 20 19:44:33.612318 containerd[1512]: time="2025-06-20T19:44:33.612286920Z" level=info msg="Container to stop \"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:44:33.649796 systemd[1]: cri-containerd-41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597.scope: Deactivated successfully.
Jun 20 19:44:33.657820 containerd[1512]: time="2025-06-20T19:44:33.657375065Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" id:\"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" pid:4678 exit_status:137 exited_at:{seconds:1750448673 nanos:654980338}"
Jun 20 19:44:33.752534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597-rootfs.mount: Deactivated successfully.
Jun 20 19:44:33.757597 containerd[1512]: time="2025-06-20T19:44:33.757416221Z" level=info msg="shim disconnected" id=41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597 namespace=k8s.io
Jun 20 19:44:33.757597 containerd[1512]: time="2025-06-20T19:44:33.757464142Z" level=warning msg="cleaning up after shim disconnected" id=41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597 namespace=k8s.io
Jun 20 19:44:33.757755 containerd[1512]: time="2025-06-20T19:44:33.757474291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:44:33.924995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204505278.mount: Deactivated successfully.
Jun 20 19:44:33.944478 containerd[1512]: time="2025-06-20T19:44:33.944394032Z" level=info msg="received exit event sandbox_id:\"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" exit_status:137 exited_at:{seconds:1750448673 nanos:654980338}"
Jun 20 19:44:33.957259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597-shm.mount: Deactivated successfully.
Jun 20 19:44:34.003643 containerd[1512]: time="2025-06-20T19:44:34.003584989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:44:34.006329 containerd[1512]: time="2025-06-20T19:44:34.006300930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345"
Jun 20 19:44:34.010197 containerd[1512]: time="2025-06-20T19:44:34.008919670Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:44:34.019205 containerd[1512]: time="2025-06-20T19:44:34.018078117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:44:34.022739 containerd[1512]: time="2025-06-20T19:44:34.022013304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 4.064695316s"
Jun 20 19:44:34.022739 containerd[1512]: time="2025-06-20T19:44:34.022075921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\""
Jun 20 19:44:34.029491 containerd[1512]: time="2025-06-20T19:44:34.029450050Z" level=info msg="CreateContainer within sandbox \"f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jun 20 19:44:34.047888 containerd[1512]: time="2025-06-20T19:44:34.047823131Z" level=info msg="Container cdca83cb7ebc7c1e6d70f9656d80db02631d3b001292dff8867cf87cb1bab8b1: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:44:34.075558 containerd[1512]: time="2025-06-20T19:44:34.075427366Z" level=info msg="CreateContainer within sandbox \"f65024d6269762c1733e0b9b55313ca4592230aab1490a54e5816132b74553ed\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"cdca83cb7ebc7c1e6d70f9656d80db02631d3b001292dff8867cf87cb1bab8b1\""
Jun 20 19:44:34.080234 containerd[1512]: time="2025-06-20T19:44:34.078110717Z" level=info msg="StartContainer for \"cdca83cb7ebc7c1e6d70f9656d80db02631d3b001292dff8867cf87cb1bab8b1\""
Jun 20 19:44:34.080697 containerd[1512]: time="2025-06-20T19:44:34.080669924Z" level=info msg="connecting to shim cdca83cb7ebc7c1e6d70f9656d80db02631d3b001292dff8867cf87cb1bab8b1" address="unix:///run/containerd/s/f7341fd6965377b57e80584f37c0f71f45bbf80ca8c77127baa11cc8933a4e4d" protocol=ttrpc version=3
Jun 20 19:44:34.126257 systemd[1]: Started cri-containerd-cdca83cb7ebc7c1e6d70f9656d80db02631d3b001292dff8867cf87cb1bab8b1.scope - libcontainer container cdca83cb7ebc7c1e6d70f9656d80db02631d3b001292dff8867cf87cb1bab8b1.
Jun 20 19:44:34.137305 systemd-networkd[1427]: cali85c828e663b: Link DOWN
Jun 20 19:44:34.137313 systemd-networkd[1427]: cali85c828e663b: Lost carrier
Jun 20 19:44:34.168600 kubelet[2775]: I0620 19:44:34.168554 2775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.127 [INFO][5659] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.130 [INFO][5659] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" iface="eth0" netns="/var/run/netns/cni-4d4e1ef7-faa5-19ed-7a7a-4ce8804cd840"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.130 [INFO][5659] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" iface="eth0" netns="/var/run/netns/cni-4d4e1ef7-faa5-19ed-7a7a-4ce8804cd840"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.149 [INFO][5659] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" after=19.136958ms iface="eth0" netns="/var/run/netns/cni-4d4e1ef7-faa5-19ed-7a7a-4ce8804cd840"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.149 [INFO][5659] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.149 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.234 [INFO][5683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.235 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.235 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.322 [INFO][5683] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.322 [INFO][5683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.324 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:44:34.330580 containerd[1512]: 2025-06-20 19:44:34.326 [INFO][5659] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:44:34.332399 containerd[1512]: time="2025-06-20T19:44:34.332320613Z" level=info msg="TearDown network for sandbox \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" successfully"
Jun 20 19:44:34.332399 containerd[1512]: time="2025-06-20T19:44:34.332350950Z" level=info msg="StopPodSandbox for \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" returns successfully"
Jun 20 19:44:34.445993 containerd[1512]: time="2025-06-20T19:44:34.445730226Z" level=info msg="StartContainer for \"cdca83cb7ebc7c1e6d70f9656d80db02631d3b001292dff8867cf87cb1bab8b1\" returns successfully"
Jun 20 19:44:34.493011 kubelet[2775]: I0620 19:44:34.492288 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8-calico-apiserver-certs\") pod \"807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8\" (UID: \"807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8\") "
Jun 20 19:44:34.493011 kubelet[2775]: I0620 19:44:34.492408 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lgwk\" (UniqueName: \"kubernetes.io/projected/807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8-kube-api-access-6lgwk\") pod \"807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8\" (UID: \"807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8\") "
Jun 20 19:44:34.499974 kubelet[2775]: I0620 19:44:34.499918 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8" (UID: "807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 20 19:44:34.500549 kubelet[2775]: I0620 19:44:34.500490 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8-kube-api-access-6lgwk" (OuterVolumeSpecName: "kube-api-access-6lgwk") pod "807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8" (UID: "807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8"). InnerVolumeSpecName "kube-api-access-6lgwk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:44:34.594408 kubelet[2775]: I0620 19:44:34.593682 2775 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8-calico-apiserver-certs\") on node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" DevicePath \"\""
Jun 20 19:44:34.594408 kubelet[2775]: I0620 19:44:34.594313 2775 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6lgwk\" (UniqueName: \"kubernetes.io/projected/807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8-kube-api-access-6lgwk\") on node \"ci-4344-1-0-8-afb8bdccbb.novalocal\" DevicePath \"\""
Jun 20 19:44:34.751449 systemd[1]: run-netns-cni\x2d4d4e1ef7\x2dfaa5\x2d19ed\x2d7a7a\x2d4ce8804cd840.mount: Deactivated successfully.
Jun 20 19:44:34.752345 systemd[1]: var-lib-kubelet-pods-807e0faf\x2d4cc3\x2d44e6\x2da4b0\x2ddcafcc5bbcf8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6lgwk.mount: Deactivated successfully.
Jun 20 19:44:34.753451 systemd[1]: var-lib-kubelet-pods-807e0faf\x2d4cc3\x2d44e6\x2da4b0\x2ddcafcc5bbcf8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jun 20 19:44:35.204972 systemd[1]: Removed slice kubepods-besteffort-pod807e0faf_4cc3_44e6_a4b0_dcafcc5bbcf8.slice - libcontainer container kubepods-besteffort-pod807e0faf_4cc3_44e6_a4b0_dcafcc5bbcf8.slice.
Jun 20 19:44:35.206072 systemd[1]: kubepods-besteffort-pod807e0faf_4cc3_44e6_a4b0_dcafcc5bbcf8.slice: Consumed 1.970s CPU time, 57.6M memory peak.
Jun 20 19:44:35.229345 kubelet[2775]: I0620 19:44:35.229215 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-656c9c5c45-65wq9" podStartSLOduration=4.007105824 podStartE2EDuration="34.227114998s" podCreationTimestamp="2025-06-20 19:44:01 +0000 UTC" firstStartedPulling="2025-06-20 19:44:03.805116868 +0000 UTC m=+55.520742446" lastFinishedPulling="2025-06-20 19:44:34.025126052 +0000 UTC m=+85.740751620" observedRunningTime="2025-06-20 19:44:35.221992676 +0000 UTC m=+86.937618254" watchObservedRunningTime="2025-06-20 19:44:35.227114998 +0000 UTC m=+86.942740576"
Jun 20 19:44:36.419073 kubelet[2775]: I0620 19:44:36.418573 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8" path="/var/lib/kubelet/pods/807e0faf-4cc3-44e6-a4b0-dcafcc5bbcf8/volumes"
Jun 20 19:44:39.072367 containerd[1512]: time="2025-06-20T19:44:39.072288217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\" id:\"821813c644e174efc50b94612ad296436b7593689f5785dbccea3dbc21f3274e\" pid:5736 exited_at:{seconds:1750448679 nanos:71687376}"
Jun 20 19:44:40.505120 systemd[1]: Started sshd@7-172.24.4.229:22-172.24.4.1:45204.service - OpenSSH per-connection server daemon (172.24.4.1:45204).
Jun 20 19:44:41.720903 sshd[5752]: Accepted publickey for core from 172.24.4.1 port 45204 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:44:41.726161 sshd-session[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:44:41.739031 systemd-logind[1497]: New session 10 of user core.
Jun 20 19:44:41.746357 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 20 19:44:42.535085 sshd[5754]: Connection closed by 172.24.4.1 port 45204
Jun 20 19:44:42.535967 sshd-session[5752]: pam_unix(sshd:session): session closed for user core
Jun 20 19:44:42.544717 systemd[1]: sshd@7-172.24.4.229:22-172.24.4.1:45204.service: Deactivated successfully.
Jun 20 19:44:42.544825 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit.
Jun 20 19:44:42.547940 systemd[1]: session-10.scope: Deactivated successfully.
Jun 20 19:44:42.550578 systemd-logind[1497]: Removed session 10.
Jun 20 19:44:47.554347 systemd[1]: Started sshd@8-172.24.4.229:22-172.24.4.1:48186.service - OpenSSH per-connection server daemon (172.24.4.1:48186).
Jun 20 19:44:48.685049 sshd[5777]: Accepted publickey for core from 172.24.4.1 port 48186 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:44:48.687261 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:44:48.696252 systemd-logind[1497]: New session 11 of user core.
Jun 20 19:44:48.705393 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:44:49.550195 sshd[5779]: Connection closed by 172.24.4.1 port 48186
Jun 20 19:44:49.548883 sshd-session[5777]: pam_unix(sshd:session): session closed for user core
Jun 20 19:44:49.558604 systemd[1]: sshd@8-172.24.4.229:22-172.24.4.1:48186.service: Deactivated successfully.
Jun 20 19:44:49.564443 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 19:44:49.567627 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit.
Jun 20 19:44:49.571341 systemd-logind[1497]: Removed session 11.
Jun 20 19:44:54.564714 systemd[1]: Started sshd@9-172.24.4.229:22-172.24.4.1:59910.service - OpenSSH per-connection server daemon (172.24.4.1:59910).
Jun 20 19:44:55.574308 sshd[5794]: Accepted publickey for core from 172.24.4.1 port 59910 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:44:55.577933 sshd-session[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:44:55.594620 systemd-logind[1497]: New session 12 of user core.
Jun 20 19:44:55.600447 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:44:56.417153 sshd[5796]: Connection closed by 172.24.4.1 port 59910
Jun 20 19:44:56.420846 sshd-session[5794]: pam_unix(sshd:session): session closed for user core
Jun 20 19:44:56.438871 systemd[1]: sshd@9-172.24.4.229:22-172.24.4.1:59910.service: Deactivated successfully.
Jun 20 19:44:56.443784 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:44:56.447675 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:44:56.453027 systemd[1]: Started sshd@10-172.24.4.229:22-172.24.4.1:59920.service - OpenSSH per-connection server daemon (172.24.4.1:59920).
Jun 20 19:44:56.457484 systemd-logind[1497]: Removed session 12.
Jun 20 19:44:57.748208 sshd[5809]: Accepted publickey for core from 172.24.4.1 port 59920 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:44:57.748879 sshd-session[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:44:57.760837 systemd-logind[1497]: New session 13 of user core.
Jun 20 19:44:57.764459 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:44:58.633569 containerd[1512]: time="2025-06-20T19:44:58.633415544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\" id:\"3da40bce73929f349774f726b456653dca052ed45eb479e6587837c559be55e7\" pid:5827 exited_at:{seconds:1750448698 nanos:630752245}"
Jun 20 19:44:58.734002 sshd[5812]: Connection closed by 172.24.4.1 port 59920
Jun 20 19:44:58.735597 sshd-session[5809]: pam_unix(sshd:session): session closed for user core
Jun 20 19:44:58.746076 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:44:58.746695 systemd[1]: sshd@10-172.24.4.229:22-172.24.4.1:59920.service: Deactivated successfully.
Jun 20 19:44:58.751152 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:44:58.756239 systemd-logind[1497]: Removed session 13.
Jun 20 19:44:58.758978 systemd[1]: Started sshd@11-172.24.4.229:22-172.24.4.1:59928.service - OpenSSH per-connection server daemon (172.24.4.1:59928).
Jun 20 19:44:59.892592 sshd[5847]: Accepted publickey for core from 172.24.4.1 port 59928 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:44:59.894706 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:44:59.906263 systemd-logind[1497]: New session 14 of user core.
Jun 20 19:44:59.915502 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:45:00.660430 sshd[5849]: Connection closed by 172.24.4.1 port 59928
Jun 20 19:45:00.661147 sshd-session[5847]: pam_unix(sshd:session): session closed for user core
Jun 20 19:45:00.666166 systemd[1]: sshd@11-172.24.4.229:22-172.24.4.1:59928.service: Deactivated successfully.
Jun 20 19:45:00.666612 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:45:00.670066 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:45:00.675144 systemd-logind[1497]: Removed session 14.
Jun 20 19:45:01.151921 containerd[1512]: time="2025-06-20T19:45:01.151777935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d\" id:\"a9811e4a516441f111880d0b54b297170b6d740e73e2afadf4153a37c61fe697\" pid:5871 exited_at:{seconds:1750448701 nanos:150166348}"
Jun 20 19:45:05.468014 containerd[1512]: time="2025-06-20T19:45:05.467913383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\" id:\"b7979f634a30642536e9615541cb0dcfa543f8e76c65120e764eb385389f9606\" pid:5898 exited_at:{seconds:1750448705 nanos:467226529}"
Jun 20 19:45:05.686537 systemd[1]: Started sshd@12-172.24.4.229:22-172.24.4.1:59150.service - OpenSSH per-connection server daemon (172.24.4.1:59150).
Jun 20 19:45:06.974157 sshd[5909]: Accepted publickey for core from 172.24.4.1 port 59150 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:45:06.978496 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:45:06.990621 systemd-logind[1497]: New session 15 of user core.
Jun 20 19:45:06.997440 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:45:07.798583 sshd[5914]: Connection closed by 172.24.4.1 port 59150
Jun 20 19:45:07.798417 sshd-session[5909]: pam_unix(sshd:session): session closed for user core
Jun 20 19:45:07.803604 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:45:07.803786 systemd[1]: sshd@12-172.24.4.229:22-172.24.4.1:59150.service: Deactivated successfully.
Jun 20 19:45:07.807467 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:45:07.811110 systemd-logind[1497]: Removed session 15.
Jun 20 19:45:08.432549 kubelet[2775]: I0620 19:45:08.432488 2775 scope.go:117] "RemoveContainer" containerID="9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c"
Jun 20 19:45:08.438614 containerd[1512]: time="2025-06-20T19:45:08.438574976Z" level=info msg="RemoveContainer for \"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\""
Jun 20 19:45:08.448664 containerd[1512]: time="2025-06-20T19:45:08.448613760Z" level=info msg="RemoveContainer for \"9e1acd49da6cd3542da806aeba543b30056d01890f7845ed2c384fe76b3dcf7c\" returns successfully"
Jun 20 19:45:08.450291 containerd[1512]: time="2025-06-20T19:45:08.450256847Z" level=info msg="StopPodSandbox for \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\""
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.529 [WARNING][5937] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.531 [INFO][5937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.531 [INFO][5937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" iface="eth0" netns=""
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.531 [INFO][5937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.531 [INFO][5937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.595 [INFO][5944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.595 [INFO][5944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.595 [INFO][5944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.616 [WARNING][5944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.616 [INFO][5944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.619 [INFO][5944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:45:08.623878 containerd[1512]: 2025-06-20 19:45:08.621 [INFO][5937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:45:08.625231 containerd[1512]: time="2025-06-20T19:45:08.624455026Z" level=info msg="TearDown network for sandbox \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" successfully"
Jun 20 19:45:08.625231 containerd[1512]: time="2025-06-20T19:45:08.624482338Z" level=info msg="StopPodSandbox for \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" returns successfully"
Jun 20 19:45:08.625542 containerd[1512]: time="2025-06-20T19:45:08.625504524Z" level=info msg="RemovePodSandbox for \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\""
Jun 20 19:45:08.625597 containerd[1512]: time="2025-06-20T19:45:08.625559087Z" level=info msg="Forcibly stopping sandbox \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\""
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.686 [WARNING][5958] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.686 [INFO][5958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.686 [INFO][5958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" iface="eth0" netns=""
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.686 [INFO][5958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.686 [INFO][5958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.730 [INFO][5965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.730 [INFO][5965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.730 [INFO][5965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.738 [WARNING][5965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.738 [INFO][5965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" HandleID="k8s-pod-network.41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--j7slj-eth0"
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.740 [INFO][5965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:45:08.747276 containerd[1512]: 2025-06-20 19:45:08.742 [INFO][5958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597"
Jun 20 19:45:08.747276 containerd[1512]: time="2025-06-20T19:45:08.747032064Z" level=info msg="TearDown network for sandbox \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" successfully"
Jun 20 19:45:08.753493 containerd[1512]: time="2025-06-20T19:45:08.753455422Z" level=info msg="Ensure that sandbox 41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597 in task-service has been cleanup successfully"
Jun 20 19:45:08.758068 containerd[1512]: time="2025-06-20T19:45:08.758011852Z" level=info msg="RemovePodSandbox \"41429ae548c17ac2a19bc31aac0711b5e124d501822e782fc2083d93a3e11597\" returns successfully"
Jun 20 19:45:08.758787 containerd[1512]: time="2025-06-20T19:45:08.758744052Z" level=info msg="StopPodSandbox for \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\""
Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.808 [WARNING][5979] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore,
moving forward with the clean up ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.808 [INFO][5979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.808 [INFO][5979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" iface="eth0" netns="" Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.808 [INFO][5979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.808 [INFO][5979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.851 [INFO][5987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.851 [INFO][5987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.851 [INFO][5987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.859 [WARNING][5987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.859 [INFO][5987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.863 [INFO][5987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:45:08.867230 containerd[1512]: 2025-06-20 19:45:08.864 [INFO][5979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:45:08.867230 containerd[1512]: time="2025-06-20T19:45:08.866461229Z" level=info msg="TearDown network for sandbox \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" successfully" Jun 20 19:45:08.867230 containerd[1512]: time="2025-06-20T19:45:08.866486446Z" level=info msg="StopPodSandbox for \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" returns successfully" Jun 20 19:45:08.867230 containerd[1512]: time="2025-06-20T19:45:08.867049287Z" level=info msg="RemovePodSandbox for \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\"" Jun 20 19:45:08.867230 containerd[1512]: time="2025-06-20T19:45:08.867089112Z" level=info msg="Forcibly stopping sandbox \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\"" Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:08.944 [WARNING][6001] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" WorkloadEndpoint="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:08.944 [INFO][6001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:08.944 [INFO][6001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" iface="eth0" netns="" Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:08.944 [INFO][6001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:08.944 [INFO][6001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:08.992 [INFO][6009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:08.992 [INFO][6009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:08.993 [INFO][6009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:09.026 [WARNING][6009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:09.026 [INFO][6009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" HandleID="k8s-pod-network.50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Workload="ci--4344--1--0--8--afb8bdccbb.novalocal-k8s-calico--apiserver--8f5b5557d--6rptn-eth0" Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:09.029 [INFO][6009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:45:09.036396 containerd[1512]: 2025-06-20 19:45:09.032 [INFO][6001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738" Jun 20 19:45:09.038625 containerd[1512]: time="2025-06-20T19:45:09.036574288Z" level=info msg="TearDown network for sandbox \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" successfully" Jun 20 19:45:09.040726 containerd[1512]: time="2025-06-20T19:45:09.040702620Z" level=info msg="Ensure that sandbox 50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738 in task-service has been cleanup successfully" Jun 20 19:45:09.045883 containerd[1512]: time="2025-06-20T19:45:09.045856466Z" level=info msg="RemovePodSandbox \"50fe678aba8520d437c994aad9a4b2b61028f04cc9660c87b65ed541604aa738\" returns successfully" Jun 20 19:45:09.145038 containerd[1512]: time="2025-06-20T19:45:09.144957695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\" id:\"ca820e14d308f5fa465497e6008ef38343d3214640c707e5fbe0dd1dcb5bd4c2\" pid:6027 exited_at:{seconds:1750448709 
nanos:144451450}" Jun 20 19:45:12.810301 systemd[1]: Started sshd@13-172.24.4.229:22-172.24.4.1:59156.service - OpenSSH per-connection server daemon (172.24.4.1:59156). Jun 20 19:45:14.081627 sshd[6037]: Accepted publickey for core from 172.24.4.1 port 59156 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:14.084005 sshd-session[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:14.098253 systemd-logind[1497]: New session 16 of user core. Jun 20 19:45:14.105435 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:45:14.849454 sshd[6039]: Connection closed by 172.24.4.1 port 59156 Jun 20 19:45:14.850188 sshd-session[6037]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:14.854654 systemd[1]: sshd@13-172.24.4.229:22-172.24.4.1:59156.service: Deactivated successfully. Jun 20 19:45:14.860016 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:45:14.861767 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:45:14.865878 systemd-logind[1497]: Removed session 16. Jun 20 19:45:19.871313 systemd[1]: Started sshd@14-172.24.4.229:22-172.24.4.1:45010.service - OpenSSH per-connection server daemon (172.24.4.1:45010). Jun 20 19:45:20.980397 sshd[6058]: Accepted publickey for core from 172.24.4.1 port 45010 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:20.982775 sshd-session[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:20.988748 systemd-logind[1497]: New session 17 of user core. Jun 20 19:45:20.996592 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 20 19:45:21.172740 containerd[1512]: time="2025-06-20T19:45:21.172693311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\" id:\"ce78145fdf3273bc0e6c81b40413821dbc4f7bcf281058677a91391856effc33\" pid:6073 exited_at:{seconds:1750448721 nanos:170866998}" Jun 20 19:45:21.709273 sshd[6060]: Connection closed by 172.24.4.1 port 45010 Jun 20 19:45:21.713299 sshd-session[6058]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:21.729462 systemd[1]: sshd@14-172.24.4.229:22-172.24.4.1:45010.service: Deactivated successfully. Jun 20 19:45:21.733254 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:45:21.734964 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:45:21.740962 systemd[1]: Started sshd@15-172.24.4.229:22-172.24.4.1:45024.service - OpenSSH per-connection server daemon (172.24.4.1:45024). Jun 20 19:45:21.744522 systemd-logind[1497]: Removed session 17. Jun 20 19:45:23.046013 sshd[6093]: Accepted publickey for core from 172.24.4.1 port 45024 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:23.047954 sshd-session[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:23.057678 systemd-logind[1497]: New session 18 of user core. Jun 20 19:45:23.065521 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:45:24.278233 sshd[6095]: Connection closed by 172.24.4.1 port 45024 Jun 20 19:45:24.276957 sshd-session[6093]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:24.290156 systemd[1]: sshd@15-172.24.4.229:22-172.24.4.1:45024.service: Deactivated successfully. Jun 20 19:45:24.294155 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:45:24.295578 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. 
Jun 20 19:45:24.302472 systemd[1]: Started sshd@16-172.24.4.229:22-172.24.4.1:48716.service - OpenSSH per-connection server daemon (172.24.4.1:48716). Jun 20 19:45:24.306124 systemd-logind[1497]: Removed session 18. Jun 20 19:45:25.545213 sshd[6105]: Accepted publickey for core from 172.24.4.1 port 48716 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:25.547726 sshd-session[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:25.596491 systemd-logind[1497]: New session 19 of user core. Jun 20 19:45:25.604428 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:45:27.897738 sshd[6107]: Connection closed by 172.24.4.1 port 48716 Jun 20 19:45:27.898031 sshd-session[6105]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:27.909601 systemd[1]: sshd@16-172.24.4.229:22-172.24.4.1:48716.service: Deactivated successfully. Jun 20 19:45:27.914313 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:45:27.917338 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:45:27.920588 systemd-logind[1497]: Removed session 19. Jun 20 19:45:27.924300 systemd[1]: Started sshd@17-172.24.4.229:22-172.24.4.1:48726.service - OpenSSH per-connection server daemon (172.24.4.1:48726). 
Jun 20 19:45:28.366476 containerd[1512]: time="2025-06-20T19:45:28.365828604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\" id:\"5394a4265878d83c0fb2b2f9e49f3657581f87dd2a48ed05c0014b19ce5c6edb\" pid:6150 exited_at:{seconds:1750448728 nanos:363054082}" Jun 20 19:45:29.143553 sshd[6136]: Accepted publickey for core from 172.24.4.1 port 48726 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:29.146057 sshd-session[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:29.158970 systemd-logind[1497]: New session 20 of user core. Jun 20 19:45:29.167613 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:45:30.137989 sshd[6160]: Connection closed by 172.24.4.1 port 48726 Jun 20 19:45:30.138590 sshd-session[6136]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:30.151401 systemd[1]: sshd@17-172.24.4.229:22-172.24.4.1:48726.service: Deactivated successfully. Jun 20 19:45:30.155483 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:45:30.157455 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:45:30.164563 systemd[1]: Started sshd@18-172.24.4.229:22-172.24.4.1:48742.service - OpenSSH per-connection server daemon (172.24.4.1:48742). Jun 20 19:45:30.167528 systemd-logind[1497]: Removed session 20. 
Jun 20 19:45:31.021197 containerd[1512]: time="2025-06-20T19:45:31.021011625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d\" id:\"a08c43117939f9bd68beb046792d25662deaa23a561822a1ea30014eeed47f64\" pid:6185 exited_at:{seconds:1750448731 nanos:15746365}" Jun 20 19:45:31.527769 sshd[6170]: Accepted publickey for core from 172.24.4.1 port 48742 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:31.531518 sshd-session[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:31.542701 systemd-logind[1497]: New session 21 of user core. Jun 20 19:45:31.548635 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:45:32.249154 sshd[6197]: Connection closed by 172.24.4.1 port 48742 Jun 20 19:45:32.249906 sshd-session[6170]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:32.253813 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:45:32.256935 systemd[1]: sshd@18-172.24.4.229:22-172.24.4.1:48742.service: Deactivated successfully. Jun 20 19:45:32.261854 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:45:32.265295 systemd-logind[1497]: Removed session 21. Jun 20 19:45:37.266468 systemd[1]: Started sshd@19-172.24.4.229:22-172.24.4.1:45332.service - OpenSSH per-connection server daemon (172.24.4.1:45332). Jun 20 19:45:38.499279 sshd[6218]: Accepted publickey for core from 172.24.4.1 port 45332 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:38.501090 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:38.515409 systemd-logind[1497]: New session 22 of user core. Jun 20 19:45:38.522351 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 20 19:45:39.047562 containerd[1512]: time="2025-06-20T19:45:39.047441814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\" id:\"e38afb1c0bcb72b3842da2f6b982143b32d7c4e8c58b854d9350c23303fbaaf7\" pid:6240 exited_at:{seconds:1750448739 nanos:45285607}" Jun 20 19:45:39.268638 sshd[6220]: Connection closed by 172.24.4.1 port 45332 Jun 20 19:45:39.268503 sshd-session[6218]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:39.272955 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:45:39.275074 systemd[1]: sshd@19-172.24.4.229:22-172.24.4.1:45332.service: Deactivated successfully. Jun 20 19:45:39.278710 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:45:39.283705 systemd-logind[1497]: Removed session 22. Jun 20 19:45:44.281433 systemd[1]: Started sshd@20-172.24.4.229:22-172.24.4.1:51414.service - OpenSSH per-connection server daemon (172.24.4.1:51414). Jun 20 19:45:45.473079 sshd[6267]: Accepted publickey for core from 172.24.4.1 port 51414 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:45.475672 sshd-session[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:45.483403 systemd-logind[1497]: New session 23 of user core. Jun 20 19:45:45.489373 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:45:46.201000 sshd[6271]: Connection closed by 172.24.4.1 port 51414 Jun 20 19:45:46.203255 sshd-session[6267]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:46.208922 systemd[1]: sshd@20-172.24.4.229:22-172.24.4.1:51414.service: Deactivated successfully. Jun 20 19:45:46.209114 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:45:46.213145 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:45:46.217751 systemd-logind[1497]: Removed session 23. 
Jun 20 19:45:51.229548 systemd[1]: Started sshd@21-172.24.4.229:22-172.24.4.1:51418.service - OpenSSH per-connection server daemon (172.24.4.1:51418). Jun 20 19:45:52.493827 sshd[6282]: Accepted publickey for core from 172.24.4.1 port 51418 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:52.497572 sshd-session[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:52.511287 systemd-logind[1497]: New session 24 of user core. Jun 20 19:45:52.517753 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:45:53.220900 sshd[6284]: Connection closed by 172.24.4.1 port 51418 Jun 20 19:45:53.221679 sshd-session[6282]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:53.227850 systemd[1]: sshd@21-172.24.4.229:22-172.24.4.1:51418.service: Deactivated successfully. Jun 20 19:45:53.228089 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:45:53.231902 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:45:53.237613 systemd-logind[1497]: Removed session 24. Jun 20 19:45:58.238317 systemd[1]: Started sshd@22-172.24.4.229:22-172.24.4.1:54370.service - OpenSSH per-connection server daemon (172.24.4.1:54370). Jun 20 19:45:58.377346 containerd[1512]: time="2025-06-20T19:45:58.377063597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\" id:\"d4ca5bf1e10ca777ace1bf263c2ebff61d07a001d0179dce143239a07f806ecb\" pid:6308 exited_at:{seconds:1750448758 nanos:372977466}" Jun 20 19:45:59.473156 sshd[6315]: Accepted publickey for core from 172.24.4.1 port 54370 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:45:59.477753 sshd-session[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:59.490421 systemd-logind[1497]: New session 25 of user core. 
Jun 20 19:45:59.495431 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 19:46:00.200109 sshd[6321]: Connection closed by 172.24.4.1 port 54370 Jun 20 19:46:00.199709 sshd-session[6315]: pam_unix(sshd:session): session closed for user core Jun 20 19:46:00.205166 systemd[1]: sshd@22-172.24.4.229:22-172.24.4.1:54370.service: Deactivated successfully. Jun 20 19:46:00.207890 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 19:46:00.209833 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit. Jun 20 19:46:00.211497 systemd-logind[1497]: Removed session 25. Jun 20 19:46:01.071636 containerd[1512]: time="2025-06-20T19:46:01.071578052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbdfcb87e10cdc88a7f48ee395e78b23eb8d638132b849c6dc5ee775b156fa8d\" id:\"2e5f2a6c35e2eadb0eb9f83070cd6fc18a6bfb3677fb333d298d60a6e95e3cf8\" pid:6344 exited_at:{seconds:1750448761 nanos:70947424}" Jun 20 19:46:05.216154 systemd[1]: Started sshd@23-172.24.4.229:22-172.24.4.1:56838.service - OpenSSH per-connection server daemon (172.24.4.1:56838). Jun 20 19:46:05.423399 containerd[1512]: time="2025-06-20T19:46:05.423350795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"149c6e6bcfde8d6ea978c4b8034295e49e9be1880f8d3a1ddc22d9d7e2dcc8b2\" id:\"acaedd06cdee27d339eb5cf41c0073be95372f58a121a91069151eb6839472c0\" pid:6372 exited_at:{seconds:1750448765 nanos:422259381}" Jun 20 19:46:07.070930 sshd[6357]: Accepted publickey for core from 172.24.4.1 port 56838 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:46:07.073348 sshd-session[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:46:07.082257 systemd-logind[1497]: New session 26 of user core. Jun 20 19:46:07.088428 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 20 19:46:07.823736 sshd[6383]: Connection closed by 172.24.4.1 port 56838 Jun 20 19:46:07.824667 sshd-session[6357]: pam_unix(sshd:session): session closed for user core Jun 20 19:46:07.831704 systemd[1]: sshd@23-172.24.4.229:22-172.24.4.1:56838.service: Deactivated successfully. Jun 20 19:46:07.836143 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 19:46:07.838010 systemd-logind[1497]: Session 26 logged out. Waiting for processes to exit. Jun 20 19:46:07.840320 systemd-logind[1497]: Removed session 26. Jun 20 19:46:09.111084 containerd[1512]: time="2025-06-20T19:46:09.111022476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"130dcc94320b254c27b5a6c4c8f27e744743889e4f08166fdf7daecd4b223352\" id:\"0cf6cb5455878f60fcb515f8d703d8294a8f7543a4b1b4fb3793eb3b3941563f\" pid:6409 exited_at:{seconds:1750448769 nanos:110473337}"